Test Report: Docker_Linux_crio 22101

e65f928d8ebd0537e3fd5f2753f43f3d5796d0a1:2025-12-12:42734

Failed tests (27/415)

TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable volcano --alsologtostderr -v=1: exit status 11 (240.974152ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1211 23:57:40.900085   24136 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:57:40.900252   24136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:40.900262   24136 out.go:374] Setting ErrFile to fd 2...
	I1211 23:57:40.900267   24136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:40.900439   24136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:57:40.900679   24136 mustload.go:66] Loading cluster: addons-758245
	I1211 23:57:40.900992   24136 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:57:40.901022   24136 addons.go:622] checking whether the cluster is paused
	I1211 23:57:40.901101   24136 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:57:40.901112   24136 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:57:40.901462   24136 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:57:40.918986   24136 ssh_runner.go:195] Run: systemctl --version
	I1211 23:57:40.919031   24136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:57:40.935638   24136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:57:41.028439   24136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:57:41.028539   24136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:57:41.057380   24136 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:57:41.057397   24136 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:57:41.057401   24136 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:57:41.057404   24136 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:57:41.057415   24136 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:57:41.057419   24136 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:57:41.057422   24136 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:57:41.057426   24136 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:57:41.057431   24136 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:57:41.057439   24136 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:57:41.057447   24136 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:57:41.057452   24136 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:57:41.057460   24136 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:57:41.057464   24136 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:57:41.057467   24136 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:57:41.057485   24136 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:57:41.057494   24136 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:57:41.057499   24136 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:57:41.057504   24136 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:57:41.057509   24136 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:57:41.057517   24136 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:57:41.057524   24136 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:57:41.057528   24136 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:57:41.057534   24136 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:57:41.057537   24136 cri.go:89] found id: ""
	I1211 23:57:41.057571   24136 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:57:41.070784   24136 out.go:203] 
	W1211 23:57:41.071919   24136 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:57:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:57:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:57:41.071939   24136 out.go:285] * 
	* 
	W1211 23:57:41.074803   24136 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:57:41.075956   24136 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
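
Note: the Registry and RegistryCreds failures below exit with this same MK_ADDON_DISABLE_PAUSED signature. In each case the addon-disable path lists the kube-system containers with crictl and then runs "sudo runc list -f json" on the node, and that command fails because /run/runc does not exist. A minimal sketch for re-running the failing check by hand, assuming the same profile name (addons-758245) and binary path as this run:

    # List kube-system containers the way the disable path does (command taken from the log above)
    out/minikube-linux-amd64 -p addons-758245 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    # The step that fails in this run: runc cannot open its state directory
    out/minikube-linux-amd64 -p addons-758245 ssh "sudo runc list -f json"
    # Confirm whether /run/runc exists on the CRI-O node (the stderr above reports it missing)
    out/minikube-linux-amd64 -p addons-758245 ssh "ls -ld /run/runc"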

TestAddons/parallel/Registry (12.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.217824ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-ctpbw" [8f86a5b8-2ea9-4839-9462-cfe7303189b3] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002254229s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-9n7hj" [1f9bf5f3-1436-4f37-97c7-503e1b285750] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003365911s
addons_test.go:394: (dbg) Run:  kubectl --context addons-758245 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-758245 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-758245 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.209657748s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 ip
2025/12/11 23:58:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable registry --alsologtostderr -v=1: exit status 11 (233.410078ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1211 23:58:02.305032   26565 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:58:02.305324   26565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:02.305334   26565 out.go:374] Setting ErrFile to fd 2...
	I1211 23:58:02.305339   26565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:02.305563   26565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:58:02.305803   26565 mustload.go:66] Loading cluster: addons-758245
	I1211 23:58:02.306075   26565 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:02.306090   26565 addons.go:622] checking whether the cluster is paused
	I1211 23:58:02.306170   26565 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:02.306181   26565 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:58:02.306576   26565 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:58:02.323439   26565 ssh_runner.go:195] Run: systemctl --version
	I1211 23:58:02.323497   26565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:58:02.339797   26565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:58:02.432657   26565 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:58:02.432742   26565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:58:02.462239   26565 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:58:02.462266   26565 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:58:02.462272   26565 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:58:02.462278   26565 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:58:02.462283   26565 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:58:02.462290   26565 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:58:02.462295   26565 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:58:02.462300   26565 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:58:02.462304   26565 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:58:02.462324   26565 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:58:02.462333   26565 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:58:02.462338   26565 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:58:02.462345   26565 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:58:02.462351   26565 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:58:02.462360   26565 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:58:02.462375   26565 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:58:02.462386   26565 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:58:02.462392   26565 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:58:02.462397   26565 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:58:02.462401   26565 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:58:02.462406   26565 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:58:02.462410   26565 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:58:02.462415   26565 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:58:02.462419   26565 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:58:02.462424   26565 cri.go:89] found id: ""
	I1211 23:58:02.462462   26565 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:58:02.475766   26565 out.go:203] 
	W1211 23:58:02.476889   26565 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:58:02.476906   26565 out.go:285] * 
	* 
	W1211 23:58:02.479858   26565 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:58:02.481042   26565 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (12.65s)

TestAddons/parallel/RegistryCreds (0.47s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.088901ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-758245
addons_test.go:334: (dbg) Run:  kubectl --context addons-758245 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (312.142984ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1211 23:58:02.700055   26676 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:58:02.700182   26676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:02.700191   26676 out.go:374] Setting ErrFile to fd 2...
	I1211 23:58:02.700195   26676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:02.700418   26676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:58:02.700698   26676 mustload.go:66] Loading cluster: addons-758245
	I1211 23:58:02.700995   26676 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:02.701023   26676 addons.go:622] checking whether the cluster is paused
	I1211 23:58:02.701104   26676 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:02.701115   26676 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:58:02.701499   26676 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:58:02.718800   26676 ssh_runner.go:195] Run: systemctl --version
	I1211 23:58:02.718840   26676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:58:02.735126   26676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:58:02.829655   26676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:58:02.829752   26676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:58:02.857034   26676 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:58:02.857060   26676 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:58:02.857066   26676 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:58:02.857071   26676 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:58:02.857076   26676 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:58:02.857081   26676 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:58:02.857084   26676 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:58:02.857087   26676 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:58:02.857089   26676 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:58:02.857098   26676 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:58:02.857104   26676 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:58:02.857107   26676 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:58:02.857110   26676 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:58:02.857113   26676 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:58:02.857116   26676 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:58:02.857133   26676 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:58:02.857144   26676 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:58:02.857151   26676 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:58:02.857155   26676 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:58:02.857159   26676 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:58:02.857167   26676 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:58:02.857175   26676 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:58:02.857180   26676 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:58:02.857187   26676 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:58:02.857190   26676 cri.go:89] found id: ""
	I1211 23:58:02.857239   26676 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:58:02.906914   26676 out.go:203] 
	W1211 23:58:02.942663   26676 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:58:02.942688   26676 out.go:285] * 
	* 
	W1211 23:58:02.945766   26676 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:58:02.948790   26676 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.47s)

TestAddons/parallel/Ingress (150.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-758245 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-758245 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-758245 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [74065274-e5a6-4b9c-b79a-6fcf18bdb885] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [74065274-e5a6-4b9c-b79a-6fcf18bdb885] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003184877s
I1211 23:58:12.800325   14503 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.349958018s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-758245 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-758245
helpers_test.go:244: (dbg) docker inspect addons-758245:

-- stdout --
	[
	    {
	        "Id": "188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04",
	        "Created": "2025-12-11T23:55:59.936546688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T23:55:59.970959441Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04/hostname",
	        "HostsPath": "/var/lib/docker/containers/188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04/hosts",
	        "LogPath": "/var/lib/docker/containers/188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04/188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04-json.log",
	        "Name": "/addons-758245",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-758245:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-758245",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04",
	                "LowerDir": "/var/lib/docker/overlay2/24ea4780e7b4c26af7d263dbb2b4589d666aed6254f0dc74fdcbd2979e0db87a-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24ea4780e7b4c26af7d263dbb2b4589d666aed6254f0dc74fdcbd2979e0db87a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24ea4780e7b4c26af7d263dbb2b4589d666aed6254f0dc74fdcbd2979e0db87a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24ea4780e7b4c26af7d263dbb2b4589d666aed6254f0dc74fdcbd2979e0db87a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-758245",
	                "Source": "/var/lib/docker/volumes/addons-758245/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-758245",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-758245",
	                "name.minikube.sigs.k8s.io": "addons-758245",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "41c22fc77b15063a4a597947417e13bb5a60f28755aef588fb8ddba38cf6acb6",
	            "SandboxKey": "/var/run/docker/netns/41c22fc77b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-758245": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8161cef5b05612868a973846e114be2fd7b210990d214b28e9e242051787a510",
	                    "EndpointID": "127e82bcd5295bbd0571f6b12e0ef42de13f73a6d84f53fafcbc0b92b289978e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "c2:56:be:f5:10:4f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-758245",
	                        "188206e030f1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-758245 -n addons-758245
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-758245 logs -n 25: (1.091144613s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-307462 --alsologtostderr --binary-mirror http://127.0.0.1:33495 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-307462 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ -p binary-mirror-307462                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-307462 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ addons  │ enable dashboard -p addons-758245                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-758245                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ start   │ -p addons-758245 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:57 UTC │
	│ addons  │ addons-758245 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:57 UTC │                     │
	│ addons  │ addons-758245 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-758245 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:57 UTC │                     │
	│ addons  │ addons-758245 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:57 UTC │                     │
	│ addons  │ addons-758245 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:57 UTC │                     │
	│ ssh     │ addons-758245 ssh cat /opt/local-path-provisioner/pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:57 UTC │ 11 Dec 25 23:57 UTC │
	│ addons  │ addons-758245 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:57 UTC │                     │
	│ addons  │ addons-758245 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │                     │
	│ ip      │ addons-758245 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │ 11 Dec 25 23:58 UTC │
	│ addons  │ addons-758245 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-758245                                                                                                                                                                                                                                                                                                                                                                                           │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │ 11 Dec 25 23:58 UTC │
	│ addons  │ addons-758245 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │                     │
	│ addons  │ addons-758245 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │                     │
	│ addons  │ addons-758245 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │                     │
	│ addons  │ addons-758245 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │                     │
	│ ssh     │ addons-758245 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │                     │
	│ addons  │ addons-758245 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │                     │
	│ addons  │ addons-758245 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │                     │
	│ addons  │ addons-758245 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-758245        │ jenkins │ v1.37.0 │ 11 Dec 25 23:58 UTC │                     │
	│ ip      │ addons-758245 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-758245        │ jenkins │ v1.37.0 │ 12 Dec 25 00:00 UTC │ 12 Dec 25 00:00 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:37.451647   16270 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:37.451737   16270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:37.451745   16270 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:37.451749   16270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:37.451957   16270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:55:37.452424   16270 out.go:368] Setting JSON to false
	I1211 23:55:37.453183   16270 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2283,"bootTime":1765495054,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:37.453266   16270 start.go:143] virtualization: kvm guest
	I1211 23:55:37.455001   16270 out.go:179] * [addons-758245] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:37.456446   16270 notify.go:221] Checking for updates...
	I1211 23:55:37.456459   16270 out.go:179]   - MINIKUBE_LOCATION=22101
	I1211 23:55:37.457617   16270 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:37.458710   16270 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1211 23:55:37.459737   16270 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1211 23:55:37.460752   16270 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:55:37.461731   16270 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:55:37.462744   16270 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:37.485733   16270 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1211 23:55:37.485817   16270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:37.536654   16270 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-11 23:55:37.527785954 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1211 23:55:37.536745   16270 docker.go:319] overlay module found
	I1211 23:55:37.538254   16270 out.go:179] * Using the docker driver based on user configuration
	I1211 23:55:37.539174   16270 start.go:309] selected driver: docker
	I1211 23:55:37.539184   16270 start.go:927] validating driver "docker" against <nil>
	I1211 23:55:37.539195   16270 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:55:37.539769   16270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:37.587801   16270 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-11 23:55:37.579390861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1211 23:55:37.587932   16270 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:37.588149   16270 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:55:37.589581   16270 out.go:179] * Using Docker driver with root privileges
	I1211 23:55:37.590674   16270 cni.go:84] Creating CNI manager for ""
	I1211 23:55:37.590747   16270 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:55:37.590761   16270 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 23:55:37.590829   16270 start.go:353] cluster config:
	{Name:addons-758245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-758245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1211 23:55:37.591895   16270 out.go:179] * Starting "addons-758245" primary control-plane node in "addons-758245" cluster
	I1211 23:55:37.592757   16270 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 23:55:37.593740   16270 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 23:55:37.594736   16270 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:55:37.594762   16270 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:55:37.594768   16270 cache.go:65] Caching tarball of preloaded images
	I1211 23:55:37.594827   16270 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 23:55:37.594857   16270 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:55:37.594865   16270 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1211 23:55:37.595204   16270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/config.json ...
	I1211 23:55:37.595232   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/config.json: {Name:mk6ff817bdab43c8ad5af9ad2a96e675f76a8d11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:55:37.610032   16270 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1211 23:55:37.610121   16270 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1211 23:55:37.610154   16270 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1211 23:55:37.610164   16270 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1211 23:55:37.610188   16270 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1211 23:55:37.610193   16270 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1211 23:55:51.379114   16270 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1211 23:55:51.379154   16270 cache.go:243] Successfully downloaded all kic artifacts
	I1211 23:55:51.379191   16270 start.go:360] acquireMachinesLock for addons-758245: {Name:mk3bbf18ce1e2e085e94a157b7afb1e5e505c9fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:55:51.379289   16270 start.go:364] duration metric: took 78.675µs to acquireMachinesLock for "addons-758245"
	I1211 23:55:51.379313   16270 start.go:93] Provisioning new machine with config: &{Name:addons-758245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-758245 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:55:51.379389   16270 start.go:125] createHost starting for "" (driver="docker")
	I1211 23:55:51.437889   16270 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1211 23:55:51.438139   16270 start.go:159] libmachine.API.Create for "addons-758245" (driver="docker")
	I1211 23:55:51.438168   16270 client.go:173] LocalClient.Create starting
	I1211 23:55:51.438289   16270 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem
	I1211 23:55:51.530498   16270 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem
	I1211 23:55:51.560510   16270 cli_runner.go:164] Run: docker network inspect addons-758245 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1211 23:55:51.578249   16270 cli_runner.go:211] docker network inspect addons-758245 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1211 23:55:51.578312   16270 network_create.go:284] running [docker network inspect addons-758245] to gather additional debugging logs...
	I1211 23:55:51.578335   16270 cli_runner.go:164] Run: docker network inspect addons-758245
	W1211 23:55:51.593493   16270 cli_runner.go:211] docker network inspect addons-758245 returned with exit code 1
	I1211 23:55:51.593520   16270 network_create.go:287] error running [docker network inspect addons-758245]: docker network inspect addons-758245: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-758245 not found
	I1211 23:55:51.593532   16270 network_create.go:289] output of [docker network inspect addons-758245]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-758245 not found
	
	** /stderr **
	I1211 23:55:51.593656   16270 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 23:55:51.609942   16270 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1e4c0}
	I1211 23:55:51.609985   16270 network_create.go:124] attempt to create docker network addons-758245 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1211 23:55:51.610025   16270 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-758245 addons-758245
	I1211 23:55:51.935647   16270 network_create.go:108] docker network addons-758245 192.168.49.0/24 created
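	The lines above show minikube picking the free 192.168.49.0/24 subnet and creating a dedicated bridge network for the cluster. For reference, a minimal sketch of recreating and verifying such a network by hand with the standard Docker CLI (the network name, subnet, gateway and MTU are the ones from this log; the masquerade/icc options from the logged command are omitted):
	
		docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
		  -o com.docker.network.driver.mtu=1500 addons-758245
		# confirm what was actually allocated
		docker network inspect addons-758245 --format '{{json .IPAM.Config}}'
	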
	I1211 23:55:51.935682   16270 kic.go:121] calculated static IP "192.168.49.2" for the "addons-758245" container
	I1211 23:55:51.935764   16270 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1211 23:55:51.951049   16270 cli_runner.go:164] Run: docker volume create addons-758245 --label name.minikube.sigs.k8s.io=addons-758245 --label created_by.minikube.sigs.k8s.io=true
	I1211 23:55:52.063194   16270 oci.go:103] Successfully created a docker volume addons-758245
	I1211 23:55:52.063292   16270 cli_runner.go:164] Run: docker run --rm --name addons-758245-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758245 --entrypoint /usr/bin/test -v addons-758245:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1211 23:55:56.177708   16270 cli_runner.go:217] Completed: docker run --rm --name addons-758245-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758245 --entrypoint /usr/bin/test -v addons-758245:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (4.114369659s)
	I1211 23:55:56.177737   16270 oci.go:107] Successfully prepared a docker volume addons-758245
	I1211 23:55:56.177786   16270 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:55:56.177797   16270 kic.go:194] Starting extracting preloaded images to volume ...
	I1211 23:55:56.177853   16270 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-758245:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1211 23:55:59.869845   16270 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-758245:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.691931221s)
	I1211 23:55:59.869882   16270 kic.go:203] duration metric: took 3.692080073s to extract preloaded images to volume ...
	W1211 23:55:59.869977   16270 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1211 23:55:59.870010   16270 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1211 23:55:59.870047   16270 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1211 23:55:59.922785   16270 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-758245 --name addons-758245 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758245 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-758245 --network addons-758245 --ip 192.168.49.2 --volume addons-758245:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1211 23:56:00.197337   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Running}}
	I1211 23:56:00.215120   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:00.232234   16270 cli_runner.go:164] Run: docker exec addons-758245 stat /var/lib/dpkg/alternatives/iptables
	I1211 23:56:00.279932   16270 oci.go:144] the created container "addons-758245" has a running status.
	I1211 23:56:00.279961   16270 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa...
	I1211 23:56:00.311861   16270 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1211 23:56:00.335200   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:00.357620   16270 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1211 23:56:00.357641   16270 kic_runner.go:114] Args: [docker exec --privileged addons-758245 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1211 23:56:00.400772   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:00.420690   16270 machine.go:94] provisionDockerMachine start ...
	I1211 23:56:00.420801   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:00.440380   16270 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:00.440715   16270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1211 23:56:00.440732   16270 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 23:56:00.441910   16270 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50836->127.0.0.1:32768: read: connection reset by peer
	I1211 23:56:03.570490   16270 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-758245
	
	I1211 23:56:03.570519   16270 ubuntu.go:182] provisioning hostname "addons-758245"
	I1211 23:56:03.570572   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:03.588844   16270 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:03.589043   16270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1211 23:56:03.589055   16270 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-758245 && echo "addons-758245" | sudo tee /etc/hostname
	I1211 23:56:03.724529   16270 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-758245
	
	I1211 23:56:03.724610   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:03.740628   16270 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:03.740868   16270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1211 23:56:03.740886   16270 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-758245' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-758245/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-758245' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:56:03.869484   16270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:56:03.869514   16270 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1211 23:56:03.869535   16270 ubuntu.go:190] setting up certificates
	I1211 23:56:03.869546   16270 provision.go:84] configureAuth start
	I1211 23:56:03.869591   16270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758245
	I1211 23:56:03.886354   16270 provision.go:143] copyHostCerts
	I1211 23:56:03.886428   16270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1211 23:56:03.886580   16270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1211 23:56:03.886689   16270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1211 23:56:03.886778   16270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.addons-758245 san=[127.0.0.1 192.168.49.2 addons-758245 localhost minikube]
	I1211 23:56:04.052619   16270 provision.go:177] copyRemoteCerts
	I1211 23:56:04.052685   16270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:56:04.052778   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.069348   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:04.163588   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 23:56:04.181238   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1211 23:56:04.196615   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 23:56:04.211984   16270 provision.go:87] duration metric: took 342.423002ms to configureAuth
	I1211 23:56:04.212006   16270 ubuntu.go:206] setting minikube options for container-runtime
	I1211 23:56:04.212177   16270 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:04.212272   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.230349   16270 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:04.230575   16270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1211 23:56:04.230591   16270 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:56:04.493501   16270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:56:04.493525   16270 machine.go:97] duration metric: took 4.072809102s to provisionDockerMachine
	I1211 23:56:04.493537   16270 client.go:176] duration metric: took 13.055363495s to LocalClient.Create
	I1211 23:56:04.493555   16270 start.go:167] duration metric: took 13.055418836s to libmachine.API.Create "addons-758245"
	I1211 23:56:04.493563   16270 start.go:293] postStartSetup for "addons-758245" (driver="docker")
	I1211 23:56:04.493571   16270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:56:04.493627   16270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:56:04.493664   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.510243   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:04.604093   16270 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:56:04.607278   16270 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 23:56:04.607306   16270 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 23:56:04.607318   16270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1211 23:56:04.607381   16270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1211 23:56:04.607407   16270 start.go:296] duration metric: took 113.839582ms for postStartSetup
	I1211 23:56:04.607694   16270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758245
	I1211 23:56:04.623853   16270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/config.json ...
	I1211 23:56:04.624074   16270 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 23:56:04.624115   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.639895   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:04.730773   16270 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 23:56:04.735014   16270 start.go:128] duration metric: took 13.355600447s to createHost
	I1211 23:56:04.735048   16270 start.go:83] releasing machines lock for "addons-758245", held for 13.355746998s
	I1211 23:56:04.735126   16270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758245
	I1211 23:56:04.751266   16270 ssh_runner.go:195] Run: cat /version.json
	I1211 23:56:04.751307   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.751352   16270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:56:04.751434   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.766533   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:04.768318   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:04.916148   16270 ssh_runner.go:195] Run: systemctl --version
	I1211 23:56:04.921844   16270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:56:04.952177   16270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:56:04.956147   16270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:56:04.956208   16270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:56:04.979261   16270 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 23:56:04.979278   16270 start.go:496] detecting cgroup driver to use...
	I1211 23:56:04.979326   16270 detect.go:190] detected "systemd" cgroup driver on host os
	I1211 23:56:04.979365   16270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:56:04.993119   16270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:56:05.003591   16270 docker.go:218] disabling cri-docker service (if available) ...
	I1211 23:56:05.003637   16270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:56:05.018234   16270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:56:05.033510   16270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:56:05.108197   16270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:56:05.195829   16270 docker.go:234] disabling docker service ...
	I1211 23:56:05.195895   16270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:56:05.212096   16270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:56:05.223355   16270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:56:05.301517   16270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:56:05.376607   16270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:56:05.387324   16270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:56:05.399723   16270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 23:56:05.399769   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.408865   16270 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1211 23:56:05.408905   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.416435   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.424112   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.431644   16270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:56:05.438612   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.446039   16270 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.457658   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.465182   16270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:56:05.471401   16270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:56:05.471452   16270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1211 23:56:05.481990   16270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:56:05.489030   16270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:05.564253   16270 ssh_runner.go:195] Run: sudo systemctl restart crio
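	The preceding lines reconfigure cri-o entirely through sed edits to the drop-in /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon cgroup, unprivileged low ports) and then restart the service. A condensed sketch of the core of that reconfiguration, assuming the same drop-in path on a systemd host; it only mirrors the commands logged above:
	
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo systemctl daemon-reload && sudo systemctl restart crio
	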
	I1211 23:56:05.686517   16270 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:56:05.686590   16270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:56:05.690214   16270 start.go:564] Will wait 60s for crictl version
	I1211 23:56:05.690254   16270 ssh_runner.go:195] Run: which crictl
	I1211 23:56:05.693414   16270 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 23:56:05.716545   16270 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
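	With the socket up, the log verifies the runtime with crictl. If the same check ever has to be repeated by hand, the equivalent is below, plus a quick container listing (the endpoint path is the one minikube wrote to /etc/crictl.yaml above; passing it explicitly is optional once that file exists):
	
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
		sudo crictl ps -a
	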
	I1211 23:56:05.716660   16270 ssh_runner.go:195] Run: crio --version
	I1211 23:56:05.742877   16270 ssh_runner.go:195] Run: crio --version
	I1211 23:56:05.769594   16270 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1211 23:56:05.770586   16270 cli_runner.go:164] Run: docker network inspect addons-758245 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 23:56:05.786862   16270 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 23:56:05.790397   16270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:05.799947   16270 kubeadm.go:884] updating cluster {Name:addons-758245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-758245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:56:05.800055   16270 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:56:05.800094   16270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:05.826899   16270 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:56:05.826915   16270 crio.go:433] Images already preloaded, skipping extraction
	I1211 23:56:05.826950   16270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:05.848845   16270 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:56:05.848861   16270 cache_images.go:86] Images are preloaded, skipping loading
	I1211 23:56:05.848868   16270 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1211 23:56:05.849006   16270 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-758245 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-758245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:56:05.849064   16270 ssh_runner.go:195] Run: crio config
	I1211 23:56:05.891990   16270 cni.go:84] Creating CNI manager for ""
	I1211 23:56:05.892010   16270 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:56:05.892033   16270 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 23:56:05.892053   16270 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-758245 NodeName:addons-758245 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:56:05.892167   16270 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-758245"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
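	The block above is the full kubeadm configuration that minikube generates and copies to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a hedged aside, a manifest like this can usually be sanity-checked without modifying the node by running kubeadm in dry-run mode (standard kubeadm flags; the path is the one from this log):
	
		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	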
	
	I1211 23:56:05.892218   16270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1211 23:56:05.899593   16270 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 23:56:05.899638   16270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 23:56:05.906727   16270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1211 23:56:05.917905   16270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:56:05.931368   16270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1211 23:56:05.942457   16270 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 23:56:05.945616   16270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:05.954258   16270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:06.030403   16270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:06.053319   16270 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245 for IP: 192.168.49.2
	I1211 23:56:06.053340   16270 certs.go:195] generating shared ca certs ...
	I1211 23:56:06.053364   16270 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.053507   16270 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1211 23:56:06.083425   16270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt ...
	I1211 23:56:06.083449   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt: {Name:mk465ac978f45c8cfec04be7ca3a8224a830e496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.083602   16270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key ...
	I1211 23:56:06.083614   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key: {Name:mkde3b032e8b8e64d138e20664de71f2523a9c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.083694   16270 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1211 23:56:06.149272   16270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt ...
	I1211 23:56:06.149298   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt: {Name:mk6aca7ffc58aefc112266ca28b54175cf3a3bf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.149446   16270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key ...
	I1211 23:56:06.149457   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key: {Name:mk9d3ef6dd311fb34aa9dc3f2cd7c88b3c3156ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.149547   16270 certs.go:257] generating profile certs ...
	I1211 23:56:06.149602   16270 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.key
	I1211 23:56:06.149616   16270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt with IP's: []
	I1211 23:56:06.420277   16270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt ...
	I1211 23:56:06.420302   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: {Name:mk30b60689901d46019a4a857b7f031d47f1e73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.420454   16270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.key ...
	I1211 23:56:06.420467   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.key: {Name:mk92f82f3c78de3308df9c171a137a9a32324738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.420547   16270 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key.d1b5940d
	I1211 23:56:06.420564   16270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt.d1b5940d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1211 23:56:06.615486   16270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt.d1b5940d ...
	I1211 23:56:06.615510   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt.d1b5940d: {Name:mkd104ac8ab443ea34a18dcfb351fb6bf3464aac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.615657   16270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key.d1b5940d ...
	I1211 23:56:06.615671   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key.d1b5940d: {Name:mk49326e8f13fe40a6ed83bdb73c63533e26df32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.615740   16270 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt.d1b5940d -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt
	I1211 23:56:06.615832   16270 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key.d1b5940d -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key
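The apiserver serving certificate copied into place above is signed for the service ClusterIP, loopback, 10.0.0.1, and the node IP 192.168.49.2. A quick way to confirm those SANs on the generated file (a sketch using the profile path from the log above):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt \
      | grep -A1 'Subject Alternative Name'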
	I1211 23:56:06.615888   16270 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.key
	I1211 23:56:06.615906   16270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.crt with IP's: []
	I1211 23:56:06.676044   16270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.crt ...
	I1211 23:56:06.676067   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.crt: {Name:mk12a7644f5e15d4ceb9087be93e635edfc6e5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.676190   16270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.key ...
	I1211 23:56:06.676200   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.key: {Name:mkcac3f890d4ff741f9972cef613e060b3064243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.676363   16270 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1211 23:56:06.676396   16270 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1211 23:56:06.676423   16270 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:56:06.676447   16270 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1211 23:56:06.677044   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:56:06.693713   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:56:06.709176   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:56:06.724589   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1211 23:56:06.740139   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 23:56:06.755657   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 23:56:06.770951   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:56:06.786361   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 23:56:06.801712   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:56:06.819090   16270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:56:06.831246   16270 ssh_runner.go:195] Run: openssl version
	I1211 23:56:06.837012   16270 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:06.843797   16270 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 23:56:06.852462   16270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:06.855745   16270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:06.855789   16270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:06.888760   16270 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 23:56:06.895224   16270 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
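The two steps above implement the standard OpenSSL subject-hash trust layout: the CA's subject hash names a symlink under /etc/ssl/certs. A minimal sketch of the same operation, using the minikubeCA.pem path from this run:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"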
	I1211 23:56:06.901651   16270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:56:06.904709   16270 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:56:06.904762   16270 kubeadm.go:401] StartCluster: {Name:addons-758245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-758245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:56:06.904825   16270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:56:06.904861   16270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:56:06.928866   16270 cri.go:89] found id: ""
	I1211 23:56:06.928916   16270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:56:06.935766   16270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:56:06.942527   16270 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 23:56:06.942562   16270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:56:06.949356   16270 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:56:06.949388   16270 kubeadm.go:158] found existing configuration files:
	
	I1211 23:56:06.949425   16270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:56:06.955963   16270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:56:06.956005   16270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:56:06.962435   16270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:56:06.969001   16270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:56:06.969046   16270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:56:06.975289   16270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:56:06.981925   16270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:56:06.981966   16270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:56:06.988464   16270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:56:06.995095   16270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:56:06.995141   16270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:56:07.001426   16270 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
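The generated /var/tmp/minikube/kubeadm.yaml passed via --config above is not printed in this log. To inspect it, or to compare it against kubeadm's built-in defaults, something like the following could be run on the node (a sketch; the kubeadm binary path is the one minikube uses in this run):

    sudo cat /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config print init-defaults

The kubeadm output for this run follows below.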
	I1211 23:56:07.033382   16270 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1211 23:56:07.033427   16270 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 23:56:07.051724   16270 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 23:56:07.051799   16270 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1211 23:56:07.051866   16270 kubeadm.go:319] OS: Linux
	I1211 23:56:07.051933   16270 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 23:56:07.052020   16270 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 23:56:07.052076   16270 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 23:56:07.052121   16270 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 23:56:07.052162   16270 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 23:56:07.052215   16270 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 23:56:07.052259   16270 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 23:56:07.052339   16270 kubeadm.go:319] CGROUPS_IO: enabled
	I1211 23:56:07.102744   16270 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:56:07.102931   16270 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:56:07.103094   16270 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:56:07.110373   16270 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:56:07.112959   16270 out.go:252]   - Generating certificates and keys ...
	I1211 23:56:07.113058   16270 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 23:56:07.113162   16270 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 23:56:07.465096   16270 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:56:07.609391   16270 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:56:07.712529   16270 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:56:07.940465   16270 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1211 23:56:08.082841   16270 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1211 23:56:08.082969   16270 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-758245 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1211 23:56:08.357074   16270 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1211 23:56:08.357214   16270 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-758245 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1211 23:56:08.765075   16270 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:56:09.002830   16270 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:56:09.587636   16270 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1211 23:56:09.587729   16270 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:56:09.737337   16270 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:56:10.244824   16270 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:56:10.444566   16270 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:56:10.705252   16270 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:56:10.888981   16270 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:56:10.889430   16270 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:56:10.892790   16270 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:56:10.894348   16270 out.go:252]   - Booting up control plane ...
	I1211 23:56:10.894504   16270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:56:10.894607   16270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:56:10.895023   16270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:56:10.908943   16270 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:56:10.909100   16270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 23:56:10.914970   16270 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 23:56:10.915260   16270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:56:10.915330   16270 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 23:56:11.003563   16270 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:56:11.003712   16270 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:56:11.504843   16270 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.433069ms
	I1211 23:56:11.507574   16270 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1211 23:56:11.507723   16270 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1211 23:56:11.507848   16270 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1211 23:56:11.507943   16270 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1211 23:56:13.253333   16270 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.745637025s
	I1211 23:56:13.466393   16270 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.958803186s
	I1211 23:56:15.009372   16270 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501787403s
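The three control-plane-check probes above hit well-known local health endpoints; they can also be exercised manually on the node (a sketch, not part of the test):

    curl -k https://192.168.49.2:8443/livez      # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler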
	I1211 23:56:15.024351   16270 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:56:15.032361   16270 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:56:15.039017   16270 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:56:15.039272   16270 kubeadm.go:319] [mark-control-plane] Marking the node addons-758245 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:56:15.046806   16270 kubeadm.go:319] [bootstrap-token] Using token: dtmtwr.z33wxy23dm2jhz5k
	I1211 23:56:15.047966   16270 out.go:252]   - Configuring RBAC rules ...
	I1211 23:56:15.048060   16270 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:56:15.050520   16270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:56:15.055728   16270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:56:15.057677   16270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:56:15.059576   16270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:56:15.062133   16270 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:56:15.414736   16270 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:56:15.827303   16270 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1211 23:56:16.414038   16270 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1211 23:56:16.414848   16270 kubeadm.go:319] 
	I1211 23:56:16.414911   16270 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1211 23:56:16.414941   16270 kubeadm.go:319] 
	I1211 23:56:16.415046   16270 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1211 23:56:16.415056   16270 kubeadm.go:319] 
	I1211 23:56:16.415092   16270 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1211 23:56:16.415154   16270 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:56:16.415232   16270 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:56:16.415244   16270 kubeadm.go:319] 
	I1211 23:56:16.415325   16270 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1211 23:56:16.415332   16270 kubeadm.go:319] 
	I1211 23:56:16.415392   16270 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:56:16.415415   16270 kubeadm.go:319] 
	I1211 23:56:16.415504   16270 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1211 23:56:16.415624   16270 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:56:16.415713   16270 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:56:16.415735   16270 kubeadm.go:319] 
	I1211 23:56:16.415868   16270 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:56:16.415971   16270 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1211 23:56:16.415983   16270 kubeadm.go:319] 
	I1211 23:56:16.416122   16270 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dtmtwr.z33wxy23dm2jhz5k \
	I1211 23:56:16.416281   16270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1211 23:56:16.416329   16270 kubeadm.go:319] 	--control-plane 
	I1211 23:56:16.416346   16270 kubeadm.go:319] 
	I1211 23:56:16.416438   16270 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:56:16.416445   16270 kubeadm.go:319] 
	I1211 23:56:16.416577   16270 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dtmtwr.z33wxy23dm2jhz5k \
	I1211 23:56:16.416731   16270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1211 23:56:16.418635   16270 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1211 23:56:16.418756   16270 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
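The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the node from the certificate directory used in this run (sketch):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'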
	I1211 23:56:16.418791   16270 cni.go:84] Creating CNI manager for ""
	I1211 23:56:16.418804   16270 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:56:16.420846   16270 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1211 23:56:16.421821   16270 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1211 23:56:16.425740   16270 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1211 23:56:16.425757   16270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1211 23:56:16.438021   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
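Once the kindnet CNI manifest above has been applied, the rollout can be checked like any other kube-system workload (sketch; the app=kindnet label is an assumption about the manifest, whose contents are not shown in this log):

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet -o wide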
	I1211 23:56:16.623327   16270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:56:16.623461   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:16.623485   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-758245 minikube.k8s.io/updated_at=2025_12_11T23_56_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=addons-758245 minikube.k8s.io/primary=true
	I1211 23:56:16.633910   16270 ops.go:34] apiserver oom_adj: -16
	I1211 23:56:16.697701   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:17.199228   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:17.698225   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:18.198513   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:18.698748   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:19.198591   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:19.697783   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:20.197996   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:20.698762   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:21.198747   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:21.698659   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:21.756636   16270 kubeadm.go:1114] duration metric: took 5.133244409s to wait for elevateKubeSystemPrivileges
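The repeated "kubectl get sa default" calls above are a readiness poll: the default ServiceAccount only exists once the controller-manager's service-account controller has run, which is what elevateKubeSystemPrivileges waits for before the cluster-admin binding can take effect. A rough shell equivalent of that loop (sketch):

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5   # the log above retries roughly every 500ms
    done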
	I1211 23:56:21.756673   16270 kubeadm.go:403] duration metric: took 14.85191434s to StartCluster
	I1211 23:56:21.756693   16270 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:21.756804   16270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1211 23:56:21.757254   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:21.757427   16270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:56:21.757462   16270 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:56:21.757527   16270 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1211 23:56:21.757661   16270 addons.go:70] Setting default-storageclass=true in profile "addons-758245"
	I1211 23:56:21.757668   16270 addons.go:70] Setting yakd=true in profile "addons-758245"
	I1211 23:56:21.757683   16270 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:21.757691   16270 addons.go:239] Setting addon yakd=true in "addons-758245"
	I1211 23:56:21.757698   16270 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-758245"
	I1211 23:56:21.757698   16270 addons.go:70] Setting cloud-spanner=true in profile "addons-758245"
	I1211 23:56:21.757725   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.757710   16270 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-758245"
	I1211 23:56:21.757730   16270 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-758245"
	I1211 23:56:21.757748   16270 addons.go:239] Setting addon cloud-spanner=true in "addons-758245"
	I1211 23:56:21.757757   16270 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-758245"
	I1211 23:56:21.757764   16270 addons.go:70] Setting storage-provisioner=true in profile "addons-758245"
	I1211 23:56:21.757786   16270 addons.go:239] Setting addon storage-provisioner=true in "addons-758245"
	I1211 23:56:21.757794   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.757808   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.757809   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.757902   16270 addons.go:70] Setting volcano=true in profile "addons-758245"
	I1211 23:56:21.757922   16270 addons.go:239] Setting addon volcano=true in "addons-758245"
	I1211 23:56:21.757946   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.758053   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758220   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758231   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758262   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758299   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758394   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758410   16270 addons.go:70] Setting volumesnapshots=true in profile "addons-758245"
	I1211 23:56:21.758425   16270 addons.go:239] Setting addon volumesnapshots=true in "addons-758245"
	I1211 23:56:21.758447   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.758884   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758992   16270 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-758245"
	I1211 23:56:21.759040   16270 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-758245"
	I1211 23:56:21.759066   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.759299   16270 addons.go:70] Setting registry=true in profile "addons-758245"
	I1211 23:56:21.759331   16270 addons.go:239] Setting addon registry=true in "addons-758245"
	I1211 23:56:21.759357   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.759903   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.759421   16270 addons.go:70] Setting registry-creds=true in profile "addons-758245"
	I1211 23:56:21.760463   16270 addons.go:239] Setting addon registry-creds=true in "addons-758245"
	I1211 23:56:21.760524   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.758393   16270 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-758245"
	I1211 23:56:21.760134   16270 addons.go:70] Setting gcp-auth=true in profile "addons-758245"
	I1211 23:56:21.760600   16270 mustload.go:66] Loading cluster: addons-758245
	I1211 23:56:21.760812   16270 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-758245"
	I1211 23:56:21.760175   16270 addons.go:70] Setting ingress=true in profile "addons-758245"
	I1211 23:56:21.760971   16270 addons.go:239] Setting addon ingress=true in "addons-758245"
	I1211 23:56:21.761019   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.761050   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.760185   16270 addons.go:70] Setting ingress-dns=true in profile "addons-758245"
	I1211 23:56:21.761383   16270 addons.go:239] Setting addon ingress-dns=true in "addons-758245"
	I1211 23:56:21.761428   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.760194   16270 addons.go:70] Setting inspektor-gadget=true in profile "addons-758245"
	I1211 23:56:21.761591   16270 addons.go:239] Setting addon inspektor-gadget=true in "addons-758245"
	I1211 23:56:21.761620   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.760825   16270 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:21.761924   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.761967   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.762109   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.764595   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.760204   16270 addons.go:70] Setting metrics-server=true in profile "addons-758245"
	I1211 23:56:21.757749   16270 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-758245"
	I1211 23:56:21.764640   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.764692   16270 out.go:179] * Verifying Kubernetes components...
	I1211 23:56:21.765227   16270 addons.go:239] Setting addon metrics-server=true in "addons-758245"
	I1211 23:56:21.765267   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.766316   16270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:21.770052   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.770278   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.770938   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.771271   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.800610   16270 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1211 23:56:21.801890   16270 out.go:179]   - Using image docker.io/registry:3.0.0
	I1211 23:56:21.803014   16270 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1211 23:56:21.803031   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1211 23:56:21.803094   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.811642   16270 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1211 23:56:21.813286   16270 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:21.813307   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1211 23:56:21.813387   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.832530   16270 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1211 23:56:21.833919   16270 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1211 23:56:21.833943   16270 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1211 23:56:21.834024   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	W1211 23:56:21.834162   16270 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1211 23:56:21.836090   16270 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:56:21.838176   16270 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1211 23:56:21.838370   16270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:21.838406   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:56:21.838492   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.839293   16270 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:21.839308   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1211 23:56:21.839358   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.849327   16270 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-758245"
	I1211 23:56:21.849384   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.849527   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.849884   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.851012   16270 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1211 23:56:21.852826   16270 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:21.852845   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1211 23:56:21.852899   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.854555   16270 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1211 23:56:21.855639   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1211 23:56:21.856847   16270 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:21.856862   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1211 23:56:21.856912   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.856990   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1211 23:56:21.856999   16270 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1211 23:56:21.857038   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.865494   16270 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1211 23:56:21.865514   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1211 23:56:21.866868   16270 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1211 23:56:21.866886   16270 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1211 23:56:21.866940   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.867328   16270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:21.867561   16270 addons.go:239] Setting addon default-storageclass=true in "addons-758245"
	I1211 23:56:21.868124   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.868403   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1211 23:56:21.868640   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.872529   16270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:21.872642   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1211 23:56:21.873900   16270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1211 23:56:21.875072   16270 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:21.875090   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1211 23:56:21.875146   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.875293   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1211 23:56:21.878205   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1211 23:56:21.879443   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1211 23:56:21.880629   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1211 23:56:21.884439   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1211 23:56:21.884777   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.884936   16270 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1211 23:56:21.887576   16270 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:21.887595   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1211 23:56:21.887630   16270 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1211 23:56:21.887648   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.888386   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1211 23:56:21.888401   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1211 23:56:21.888456   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.888799   16270 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:21.888812   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1211 23:56:21.888877   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.904470   16270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
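The sed pipeline above patches the coredns ConfigMap so that host.minikube.internal resolves to the gateway address 192.168.49.1. The patched Corefile can be read back to confirm the injected hosts block (sketch):

    # Expected fragment added by the pipeline:
    #   hosts {
    #      192.168.49.1 host.minikube.internal
    #      fallthrough
    #   }
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'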
	I1211 23:56:21.911425   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.921582   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.925110   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.927185   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.941915   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.944410   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.949552   16270 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:21.949739   16270 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:56:21.950290   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.951097   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.956947   16270 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1211 23:56:21.958523   16270 out.go:179]   - Using image docker.io/busybox:stable
	I1211 23:56:21.958679   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.959943   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.959961   16270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:21.959975   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1211 23:56:21.960028   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.961449   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.962564   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.982948   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.988369   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.993461   16270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:21.997936   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:22.068794   16270 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1211 23:56:22.068818   16270 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1211 23:56:22.088074   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:22.088681   16270 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:22.088697   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1211 23:56:22.095442   16270 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1211 23:56:22.095460   16270 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1211 23:56:22.102110   16270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1211 23:56:22.102132   16270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1211 23:56:22.107181   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:22.114271   16270 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1211 23:56:22.114301   16270 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1211 23:56:22.121243   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:22.135083   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1211 23:56:22.135112   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1211 23:56:22.136292   16270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1211 23:56:22.136315   16270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1211 23:56:22.145905   16270 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1211 23:56:22.145923   16270 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1211 23:56:22.146889   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:22.153994   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:22.155984   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:22.157068   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:22.157769   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:22.161088   16270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1211 23:56:22.161113   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1211 23:56:22.163041   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:22.169974   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:22.176165   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:22.198968   16270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1211 23:56:22.199072   16270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1211 23:56:22.207187   16270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1211 23:56:22.207208   16270 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1211 23:56:22.207344   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1211 23:56:22.207355   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1211 23:56:22.212168   16270 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:22.212186   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1211 23:56:22.255468   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1211 23:56:22.255510   16270 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1211 23:56:22.267654   16270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:22.267675   16270 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1211 23:56:22.278237   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1211 23:56:22.278548   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1211 23:56:22.278878   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:22.307630   16270 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:22.307652   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1211 23:56:22.320784   16270 node_ready.go:35] waiting up to 6m0s for node "addons-758245" to be "Ready" ...
	I1211 23:56:22.321059   16270 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1211 23:56:22.338647   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:22.350839   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1211 23:56:22.350866   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1211 23:56:22.358465   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:22.396657   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1211 23:56:22.396691   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1211 23:56:22.454085   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1211 23:56:22.454109   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1211 23:56:22.512196   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1211 23:56:22.512226   16270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1211 23:56:22.566912   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1211 23:56:22.566943   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1211 23:56:22.624831   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1211 23:56:22.624860   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1211 23:56:22.653977   16270 addons.go:495] Verifying addon registry=true in "addons-758245"
	I1211 23:56:22.656215   16270 out.go:179] * Verifying registry addon...
	I1211 23:56:22.658132   16270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1211 23:56:22.667888   16270 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:56:22.667922   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:22.676364   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:22.676385   16270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1211 23:56:22.720418   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:22.826375   16270 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-758245" context rescaled to 1 replicas
	I1211 23:56:23.161982   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:23.276559   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.120541763s)
	I1211 23:56:23.276599   16270 addons.go:495] Verifying addon ingress=true in "addons-758245"
	I1211 23:56:23.276670   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.118873305s)
	I1211 23:56:23.276737   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.119644481s)
	I1211 23:56:23.276808   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.113747439s)
	I1211 23:56:23.276862   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.106862228s)
	I1211 23:56:23.276957   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.100719304s)
	I1211 23:56:23.277104   16270 addons.go:495] Verifying addon metrics-server=true in "addons-758245"
	I1211 23:56:23.278005   16270 out.go:179] * Verifying ingress addon...
	I1211 23:56:23.278653   16270 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-758245 service yakd-dashboard -n yakd-dashboard
	
	I1211 23:56:23.280712   16270 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1211 23:56:23.282978   16270 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	W1211 23:56:23.283065   16270 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1211 23:56:23.661313   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:23.672904   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.314380478s)
	W1211 23:56:23.672957   16270 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:56:23.672986   16270 retry.go:31] will retry after 231.048662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:56:23.673098   16270 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-758245"
	I1211 23:56:23.674636   16270 out.go:179] * Verifying csi-hostpath-driver addon...
	I1211 23:56:23.676627   16270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1211 23:56:23.679757   16270 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:56:23.679782   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:23.784138   16270 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1211 23:56:23.784158   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:23.904155   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:24.160751   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:24.179356   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:24.283383   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:24.322929   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:24.661012   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:24.678689   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:24.783628   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:25.160870   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:25.178725   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:25.283430   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:25.661127   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:25.679175   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:25.783353   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:26.160345   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:26.178759   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:26.283688   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:26.323739   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:26.346718   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.442522654s)
	I1211 23:56:26.660458   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:26.679188   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:26.783212   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:27.160610   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:27.179274   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:27.283259   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:27.660441   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:27.678987   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:27.783316   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:28.160981   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:28.178688   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:28.283708   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:28.323876   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:28.660822   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:28.678410   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:28.783602   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:29.160853   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:29.178549   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:29.283525   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:29.459546   16270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1211 23:56:29.459621   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:29.476826   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:29.575268   16270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1211 23:56:29.586459   16270 addons.go:239] Setting addon gcp-auth=true in "addons-758245"
	I1211 23:56:29.586509   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:29.586820   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:29.603327   16270 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1211 23:56:29.603376   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:29.618533   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:29.661697   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:29.678517   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:29.709769   16270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:29.710920   16270 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1211 23:56:29.711946   16270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1211 23:56:29.711958   16270 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1211 23:56:29.724074   16270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1211 23:56:29.724088   16270 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1211 23:56:29.735650   16270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:29.735669   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1211 23:56:29.747529   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:29.783844   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:30.033393   16270 addons.go:495] Verifying addon gcp-auth=true in "addons-758245"
	I1211 23:56:30.034712   16270 out.go:179] * Verifying gcp-auth addon...
	I1211 23:56:30.036444   16270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1211 23:56:30.038506   16270 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1211 23:56:30.038533   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:30.160705   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:30.179540   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:30.283610   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:30.538547   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:30.668753   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:30.679159   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:30.783262   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:30.823026   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:31.039589   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:31.160796   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:31.178619   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:31.283823   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:31.538941   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:31.661044   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:31.678980   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:31.782990   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:32.039267   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:32.160268   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:32.179033   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:32.283138   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:32.539110   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:32.661330   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:32.679028   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:32.783089   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:33.039330   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:33.160348   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:33.179164   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:33.283384   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:33.323054   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:33.539377   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:33.660679   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:33.679385   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:33.783655   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:34.038647   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:34.160636   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:34.179318   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:34.283330   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:34.539557   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:34.660816   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:34.678516   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:34.783354   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:35.039577   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:35.160932   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:35.178765   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:35.282798   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:35.323454   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:35.538659   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:35.661096   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:35.679008   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:35.783393   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:36.039926   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:36.161108   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:36.178845   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:36.283182   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:36.539224   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:36.660440   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:36.679190   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:36.783264   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:37.039682   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:37.160690   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:37.179441   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:37.283874   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:37.323606   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:37.539089   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:37.661221   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:37.679081   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:37.783280   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:38.039456   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:38.160702   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:38.179398   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:38.283546   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:38.539428   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:38.660700   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:38.678357   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:38.783454   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:39.039793   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:39.160791   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:39.178637   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:39.283909   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:39.539015   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:39.661153   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:39.679045   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:39.783187   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:39.822935   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:40.039253   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:40.160514   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:40.179389   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:40.283388   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:40.539562   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:40.660787   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:40.678611   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:40.783625   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:41.039819   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:41.161336   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:41.179192   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:41.283290   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:41.539858   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:41.661163   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:41.679142   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:41.783468   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:41.823160   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:42.039666   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:42.160796   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:42.178644   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:42.283686   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:42.538826   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:42.661023   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:42.678777   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:42.782777   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:43.038877   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:43.161056   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:43.178935   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:43.283127   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:43.539110   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:43.661450   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:43.679377   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:43.783701   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:43.823671   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:44.039191   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:44.161210   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:44.178983   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:44.283126   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:44.539235   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:44.660422   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:44.679182   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:44.783248   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:45.039174   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:45.160186   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:45.178994   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:45.283018   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:45.538929   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:45.660970   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:45.678870   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:45.783029   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:46.039364   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:46.160462   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:46.179263   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:46.283346   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:46.323238   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:46.540065   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:46.661211   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:46.678941   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:46.782973   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:47.039295   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:47.160512   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:47.179300   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:47.283789   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:47.539103   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:47.661308   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:47.679227   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:47.783612   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:48.038561   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:48.160756   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:48.178427   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:48.283509   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:48.539560   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:48.660584   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:48.679449   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:48.783375   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:48.823077   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:49.039362   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:49.160293   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:49.179047   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:49.283310   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:49.539616   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:49.660652   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:49.679720   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:49.783916   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:50.038922   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:50.161000   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:50.178861   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:50.283193   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:50.539101   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:50.661352   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:50.679291   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:50.783307   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:51.039392   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:51.160384   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:51.179128   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:51.283051   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:51.322591   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:51.538828   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:51.661123   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:51.678983   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:51.783331   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.039739   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:52.160762   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.178451   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:52.283650   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.539199   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:52.660400   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.679379   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:52.783515   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:53.039508   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:53.160814   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.178691   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:53.283785   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:53.323732   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:53.538956   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:53.660975   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.678851   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:53.783161   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.039253   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:54.160169   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.178939   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.283152   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.539304   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:54.660374   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.679266   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.783406   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:55.039221   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:55.160277   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.179159   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.283422   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:55.539390   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:55.660639   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.679709   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.783915   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:55.822651   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:56.039125   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.161231   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:56.179030   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.283110   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.539093   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.661195   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:56.680068   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.783201   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:57.039057   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:57.161074   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.178883   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:57.283348   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:57.539688   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:57.660894   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.678826   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:57.782862   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:57.823919   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:58.039297   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:58.160264   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:58.179096   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:58.283198   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:58.538889   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:58.660936   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:58.678775   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:58.782690   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:59.038592   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:59.160469   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:59.179245   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:59.283372   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:59.539427   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:59.660698   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:59.678452   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:59.783578   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:00.038663   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:00.160722   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:00.179469   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:00.283642   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:57:00.323311   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:57:00.539542   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:00.660544   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:00.679313   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:00.783374   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:01.039364   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:01.160543   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.179401   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:01.283533   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:01.538684   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:01.660758   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.678838   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:01.782886   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.039186   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:02.160397   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:02.179426   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:02.283932   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.539074   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:02.661355   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:02.679734   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:02.782707   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.823466   16270 node_ready.go:49] node "addons-758245" is "Ready"
	I1211 23:57:02.823515   16270 node_ready.go:38] duration metric: took 40.502700107s for node "addons-758245" to be "Ready" ...
	I1211 23:57:02.823534   16270 api_server.go:52] waiting for apiserver process to appear ...
	I1211 23:57:02.823594   16270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 23:57:02.839929   16270 api_server.go:72] duration metric: took 41.082419196s to wait for apiserver process to appear ...
	I1211 23:57:02.839954   16270 api_server.go:88] waiting for apiserver healthz status ...
	I1211 23:57:02.839978   16270 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1211 23:57:02.845343   16270 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1211 23:57:02.846396   16270 api_server.go:141] control plane version: v1.34.2
	I1211 23:57:02.846424   16270 api_server.go:131] duration metric: took 6.461954ms to wait for apiserver health ...
	I1211 23:57:02.846436   16270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 23:57:02.855158   16270 system_pods.go:59] 20 kube-system pods found
	I1211 23:57:02.855196   16270 system_pods.go:61] "amd-gpu-device-plugin-t4nwx" [c2c64d04-3068-46a6-9981-66ab6ada3e01] Pending
	I1211 23:57:02.855208   16270 system_pods.go:61] "coredns-66bc5c9577-xxwm5" [de02078f-e23e-4ca9-91e5-0f424f00cee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 23:57:02.855217   16270 system_pods.go:61] "csi-hostpath-attacher-0" [c7bae797-cf53-4b9c-97c3-0b0e891df4b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:57:02.855229   16270 system_pods.go:61] "csi-hostpath-resizer-0" [718b3422-383a-4f39-a77c-14135e707c4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:57:02.855235   16270 system_pods.go:61] "csi-hostpathplugin-5nn2t" [dd0d3f24-7210-49df-84b7-95e27387db16] Pending
	I1211 23:57:02.855240   16270 system_pods.go:61] "etcd-addons-758245" [35d2e8f3-6273-42f6-b0b6-69593941932e] Running
	I1211 23:57:02.855244   16270 system_pods.go:61] "kindnet-vctlp" [e3d24177-0c80-4919-b891-7f26f355f0f1] Running
	I1211 23:57:02.855251   16270 system_pods.go:61] "kube-apiserver-addons-758245" [24efbe93-266c-488f-9474-c8ebd3a90385] Running
	I1211 23:57:02.855256   16270 system_pods.go:61] "kube-controller-manager-addons-758245" [0eb5a6aa-9ac9-4b60-a5e5-58078366cd04] Running
	I1211 23:57:02.855264   16270 system_pods.go:61] "kube-ingress-dns-minikube" [ceaf163d-b482-416b-99af-e0419ec5a9a5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:57:02.855269   16270 system_pods.go:61] "kube-proxy-2ldz5" [aa397dad-1532-47a5-9c67-d54a6c33f5c9] Running
	I1211 23:57:02.855274   16270 system_pods.go:61] "kube-scheduler-addons-758245" [586dd9ce-332f-4ace-a65f-ef7bf74d1e0b] Running
	I1211 23:57:02.855281   16270 system_pods.go:61] "metrics-server-85b7d694d7-lzpx2" [b6634516-aafd-439b-aa29-2feb3678dca5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:57:02.855290   16270 system_pods.go:61] "nvidia-device-plugin-daemonset-5r9hw" [cf07904f-5cff-48d7-a33a-06925d2e0b55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:57:02.855305   16270 system_pods.go:61] "registry-6b586f9694-ctpbw" [8f86a5b8-2ea9-4839-9462-cfe7303189b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:57:02.855313   16270 system_pods.go:61] "registry-creds-764b6fb674-rkslf" [d7a05cdd-171e-4999-9c92-9696f6943f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:57:02.855321   16270 system_pods.go:61] "registry-proxy-9n7hj" [1f9bf5f3-1436-4f37-97c7-503e1b285750] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:57:02.855328   16270 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4vpl2" [4fa25eb3-c777-444e-8df2-1f82dd6b89b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:02.855337   16270 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tgsc4" [c54cb689-98d5-4b4c-aef9-5a82bff58c3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:02.855346   16270 system_pods.go:61] "storage-provisioner" [fbe17644-ac82-4cd1-ba81-92edc1aa060f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:57:02.855353   16270 system_pods.go:74] duration metric: took 8.910426ms to wait for pod list to return data ...
	I1211 23:57:02.855362   16270 default_sa.go:34] waiting for default service account to be created ...
	I1211 23:57:02.863741   16270 default_sa.go:45] found service account: "default"
	I1211 23:57:02.863767   16270 default_sa.go:55] duration metric: took 8.398261ms for default service account to be created ...
	I1211 23:57:02.863777   16270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 23:57:02.869797   16270 system_pods.go:86] 20 kube-system pods found
	I1211 23:57:02.869832   16270 system_pods.go:89] "amd-gpu-device-plugin-t4nwx" [c2c64d04-3068-46a6-9981-66ab6ada3e01] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:57:02.869842   16270 system_pods.go:89] "coredns-66bc5c9577-xxwm5" [de02078f-e23e-4ca9-91e5-0f424f00cee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 23:57:02.869852   16270 system_pods.go:89] "csi-hostpath-attacher-0" [c7bae797-cf53-4b9c-97c3-0b0e891df4b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:57:02.869861   16270 system_pods.go:89] "csi-hostpath-resizer-0" [718b3422-383a-4f39-a77c-14135e707c4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:57:02.869876   16270 system_pods.go:89] "csi-hostpathplugin-5nn2t" [dd0d3f24-7210-49df-84b7-95e27387db16] Pending
	I1211 23:57:02.869882   16270 system_pods.go:89] "etcd-addons-758245" [35d2e8f3-6273-42f6-b0b6-69593941932e] Running
	I1211 23:57:02.869894   16270 system_pods.go:89] "kindnet-vctlp" [e3d24177-0c80-4919-b891-7f26f355f0f1] Running
	I1211 23:57:02.869900   16270 system_pods.go:89] "kube-apiserver-addons-758245" [24efbe93-266c-488f-9474-c8ebd3a90385] Running
	I1211 23:57:02.869912   16270 system_pods.go:89] "kube-controller-manager-addons-758245" [0eb5a6aa-9ac9-4b60-a5e5-58078366cd04] Running
	I1211 23:57:02.869920   16270 system_pods.go:89] "kube-ingress-dns-minikube" [ceaf163d-b482-416b-99af-e0419ec5a9a5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:57:02.869933   16270 system_pods.go:89] "kube-proxy-2ldz5" [aa397dad-1532-47a5-9c67-d54a6c33f5c9] Running
	I1211 23:57:02.869939   16270 system_pods.go:89] "kube-scheduler-addons-758245" [586dd9ce-332f-4ace-a65f-ef7bf74d1e0b] Running
	I1211 23:57:02.869947   16270 system_pods.go:89] "metrics-server-85b7d694d7-lzpx2" [b6634516-aafd-439b-aa29-2feb3678dca5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:57:02.869961   16270 system_pods.go:89] "nvidia-device-plugin-daemonset-5r9hw" [cf07904f-5cff-48d7-a33a-06925d2e0b55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:57:02.869968   16270 system_pods.go:89] "registry-6b586f9694-ctpbw" [8f86a5b8-2ea9-4839-9462-cfe7303189b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:57:02.869975   16270 system_pods.go:89] "registry-creds-764b6fb674-rkslf" [d7a05cdd-171e-4999-9c92-9696f6943f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:57:02.869983   16270 system_pods.go:89] "registry-proxy-9n7hj" [1f9bf5f3-1436-4f37-97c7-503e1b285750] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:57:02.869991   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4vpl2" [4fa25eb3-c777-444e-8df2-1f82dd6b89b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:02.869999   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tgsc4" [c54cb689-98d5-4b4c-aef9-5a82bff58c3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:02.870006   16270 system_pods.go:89] "storage-provisioner" [fbe17644-ac82-4cd1-ba81-92edc1aa060f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:57:02.870024   16270 retry.go:31] will retry after 253.885498ms: missing components: kube-dns
	I1211 23:57:03.039435   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:03.142071   16270 system_pods.go:86] 20 kube-system pods found
	I1211 23:57:03.142112   16270 system_pods.go:89] "amd-gpu-device-plugin-t4nwx" [c2c64d04-3068-46a6-9981-66ab6ada3e01] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:57:03.142123   16270 system_pods.go:89] "coredns-66bc5c9577-xxwm5" [de02078f-e23e-4ca9-91e5-0f424f00cee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 23:57:03.142133   16270 system_pods.go:89] "csi-hostpath-attacher-0" [c7bae797-cf53-4b9c-97c3-0b0e891df4b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:57:03.142142   16270 system_pods.go:89] "csi-hostpath-resizer-0" [718b3422-383a-4f39-a77c-14135e707c4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:57:03.142151   16270 system_pods.go:89] "csi-hostpathplugin-5nn2t" [dd0d3f24-7210-49df-84b7-95e27387db16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:57:03.142160   16270 system_pods.go:89] "etcd-addons-758245" [35d2e8f3-6273-42f6-b0b6-69593941932e] Running
	I1211 23:57:03.142167   16270 system_pods.go:89] "kindnet-vctlp" [e3d24177-0c80-4919-b891-7f26f355f0f1] Running
	I1211 23:57:03.142173   16270 system_pods.go:89] "kube-apiserver-addons-758245" [24efbe93-266c-488f-9474-c8ebd3a90385] Running
	I1211 23:57:03.142178   16270 system_pods.go:89] "kube-controller-manager-addons-758245" [0eb5a6aa-9ac9-4b60-a5e5-58078366cd04] Running
	I1211 23:57:03.142186   16270 system_pods.go:89] "kube-ingress-dns-minikube" [ceaf163d-b482-416b-99af-e0419ec5a9a5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:57:03.142191   16270 system_pods.go:89] "kube-proxy-2ldz5" [aa397dad-1532-47a5-9c67-d54a6c33f5c9] Running
	I1211 23:57:03.142197   16270 system_pods.go:89] "kube-scheduler-addons-758245" [586dd9ce-332f-4ace-a65f-ef7bf74d1e0b] Running
	I1211 23:57:03.142204   16270 system_pods.go:89] "metrics-server-85b7d694d7-lzpx2" [b6634516-aafd-439b-aa29-2feb3678dca5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:57:03.142214   16270 system_pods.go:89] "nvidia-device-plugin-daemonset-5r9hw" [cf07904f-5cff-48d7-a33a-06925d2e0b55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:57:03.142224   16270 system_pods.go:89] "registry-6b586f9694-ctpbw" [8f86a5b8-2ea9-4839-9462-cfe7303189b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:57:03.142232   16270 system_pods.go:89] "registry-creds-764b6fb674-rkslf" [d7a05cdd-171e-4999-9c92-9696f6943f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:57:03.142240   16270 system_pods.go:89] "registry-proxy-9n7hj" [1f9bf5f3-1436-4f37-97c7-503e1b285750] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:57:03.142250   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4vpl2" [4fa25eb3-c777-444e-8df2-1f82dd6b89b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.142258   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tgsc4" [c54cb689-98d5-4b4c-aef9-5a82bff58c3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.142265   16270 system_pods.go:89] "storage-provisioner" [fbe17644-ac82-4cd1-ba81-92edc1aa060f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:57:03.142284   16270 retry.go:31] will retry after 385.118569ms: missing components: kube-dns
	I1211 23:57:03.240370   16270 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:57:03.240393   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:03.240599   16270 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:57:03.240619   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:03.283327   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:03.532664   16270 system_pods.go:86] 20 kube-system pods found
	I1211 23:57:03.532697   16270 system_pods.go:89] "amd-gpu-device-plugin-t4nwx" [c2c64d04-3068-46a6-9981-66ab6ada3e01] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:57:03.532705   16270 system_pods.go:89] "coredns-66bc5c9577-xxwm5" [de02078f-e23e-4ca9-91e5-0f424f00cee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 23:57:03.532712   16270 system_pods.go:89] "csi-hostpath-attacher-0" [c7bae797-cf53-4b9c-97c3-0b0e891df4b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:57:03.532717   16270 system_pods.go:89] "csi-hostpath-resizer-0" [718b3422-383a-4f39-a77c-14135e707c4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:57:03.532723   16270 system_pods.go:89] "csi-hostpathplugin-5nn2t" [dd0d3f24-7210-49df-84b7-95e27387db16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:57:03.532727   16270 system_pods.go:89] "etcd-addons-758245" [35d2e8f3-6273-42f6-b0b6-69593941932e] Running
	I1211 23:57:03.532731   16270 system_pods.go:89] "kindnet-vctlp" [e3d24177-0c80-4919-b891-7f26f355f0f1] Running
	I1211 23:57:03.532734   16270 system_pods.go:89] "kube-apiserver-addons-758245" [24efbe93-266c-488f-9474-c8ebd3a90385] Running
	I1211 23:57:03.532738   16270 system_pods.go:89] "kube-controller-manager-addons-758245" [0eb5a6aa-9ac9-4b60-a5e5-58078366cd04] Running
	I1211 23:57:03.532743   16270 system_pods.go:89] "kube-ingress-dns-minikube" [ceaf163d-b482-416b-99af-e0419ec5a9a5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:57:03.532750   16270 system_pods.go:89] "kube-proxy-2ldz5" [aa397dad-1532-47a5-9c67-d54a6c33f5c9] Running
	I1211 23:57:03.532754   16270 system_pods.go:89] "kube-scheduler-addons-758245" [586dd9ce-332f-4ace-a65f-ef7bf74d1e0b] Running
	I1211 23:57:03.532760   16270 system_pods.go:89] "metrics-server-85b7d694d7-lzpx2" [b6634516-aafd-439b-aa29-2feb3678dca5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:57:03.532768   16270 system_pods.go:89] "nvidia-device-plugin-daemonset-5r9hw" [cf07904f-5cff-48d7-a33a-06925d2e0b55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:57:03.532773   16270 system_pods.go:89] "registry-6b586f9694-ctpbw" [8f86a5b8-2ea9-4839-9462-cfe7303189b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:57:03.532778   16270 system_pods.go:89] "registry-creds-764b6fb674-rkslf" [d7a05cdd-171e-4999-9c92-9696f6943f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:57:03.532782   16270 system_pods.go:89] "registry-proxy-9n7hj" [1f9bf5f3-1436-4f37-97c7-503e1b285750] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:57:03.532790   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4vpl2" [4fa25eb3-c777-444e-8df2-1f82dd6b89b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.532795   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tgsc4" [c54cb689-98d5-4b4c-aef9-5a82bff58c3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.532801   16270 system_pods.go:89] "storage-provisioner" [fbe17644-ac82-4cd1-ba81-92edc1aa060f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:57:03.532819   16270 retry.go:31] will retry after 364.851391ms: missing components: kube-dns
	I1211 23:57:03.538725   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:03.660974   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:03.679090   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:03.786037   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:03.901572   16270 system_pods.go:86] 20 kube-system pods found
	I1211 23:57:03.901603   16270 system_pods.go:89] "amd-gpu-device-plugin-t4nwx" [c2c64d04-3068-46a6-9981-66ab6ada3e01] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:57:03.901609   16270 system_pods.go:89] "coredns-66bc5c9577-xxwm5" [de02078f-e23e-4ca9-91e5-0f424f00cee3] Running
	I1211 23:57:03.901617   16270 system_pods.go:89] "csi-hostpath-attacher-0" [c7bae797-cf53-4b9c-97c3-0b0e891df4b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:57:03.901623   16270 system_pods.go:89] "csi-hostpath-resizer-0" [718b3422-383a-4f39-a77c-14135e707c4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:57:03.901628   16270 system_pods.go:89] "csi-hostpathplugin-5nn2t" [dd0d3f24-7210-49df-84b7-95e27387db16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:57:03.901633   16270 system_pods.go:89] "etcd-addons-758245" [35d2e8f3-6273-42f6-b0b6-69593941932e] Running
	I1211 23:57:03.901637   16270 system_pods.go:89] "kindnet-vctlp" [e3d24177-0c80-4919-b891-7f26f355f0f1] Running
	I1211 23:57:03.901641   16270 system_pods.go:89] "kube-apiserver-addons-758245" [24efbe93-266c-488f-9474-c8ebd3a90385] Running
	I1211 23:57:03.901647   16270 system_pods.go:89] "kube-controller-manager-addons-758245" [0eb5a6aa-9ac9-4b60-a5e5-58078366cd04] Running
	I1211 23:57:03.901653   16270 system_pods.go:89] "kube-ingress-dns-minikube" [ceaf163d-b482-416b-99af-e0419ec5a9a5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:57:03.901658   16270 system_pods.go:89] "kube-proxy-2ldz5" [aa397dad-1532-47a5-9c67-d54a6c33f5c9] Running
	I1211 23:57:03.901661   16270 system_pods.go:89] "kube-scheduler-addons-758245" [586dd9ce-332f-4ace-a65f-ef7bf74d1e0b] Running
	I1211 23:57:03.901666   16270 system_pods.go:89] "metrics-server-85b7d694d7-lzpx2" [b6634516-aafd-439b-aa29-2feb3678dca5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:57:03.901673   16270 system_pods.go:89] "nvidia-device-plugin-daemonset-5r9hw" [cf07904f-5cff-48d7-a33a-06925d2e0b55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:57:03.901678   16270 system_pods.go:89] "registry-6b586f9694-ctpbw" [8f86a5b8-2ea9-4839-9462-cfe7303189b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:57:03.901721   16270 system_pods.go:89] "registry-creds-764b6fb674-rkslf" [d7a05cdd-171e-4999-9c92-9696f6943f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:57:03.901732   16270 system_pods.go:89] "registry-proxy-9n7hj" [1f9bf5f3-1436-4f37-97c7-503e1b285750] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:57:03.901738   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4vpl2" [4fa25eb3-c777-444e-8df2-1f82dd6b89b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.901745   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tgsc4" [c54cb689-98d5-4b4c-aef9-5a82bff58c3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.901750   16270 system_pods.go:89] "storage-provisioner" [fbe17644-ac82-4cd1-ba81-92edc1aa060f] Running
	I1211 23:57:03.901757   16270 system_pods.go:126] duration metric: took 1.037972891s to wait for k8s-apps to be running ...
	I1211 23:57:03.901766   16270 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 23:57:03.901809   16270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 23:57:03.914188   16270 system_svc.go:56] duration metric: took 12.412668ms WaitForService to wait for kubelet
	I1211 23:57:03.914215   16270 kubeadm.go:587] duration metric: took 42.15670987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:57:03.914233   16270 node_conditions.go:102] verifying NodePressure condition ...
	I1211 23:57:03.916372   16270 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1211 23:57:03.916394   16270 node_conditions.go:123] node cpu capacity is 8
	I1211 23:57:03.916408   16270 node_conditions.go:105] duration metric: took 2.171236ms to run NodePressure ...
	I1211 23:57:03.916421   16270 start.go:242] waiting for startup goroutines ...
	I1211 23:57:04.040201   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:04.162073   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:04.180252   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:04.283990   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:04.540615   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:04.661030   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:04.679007   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:04.783106   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.040215   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:05.161523   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:05.262936   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:05.284887   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.539709   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:05.661487   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:05.680147   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:05.783959   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:06.039791   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:06.161559   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:06.179968   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:06.283703   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:06.539597   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:06.661253   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:06.679403   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:06.783813   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:07.039616   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:07.160889   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:07.178966   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:07.283418   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:07.540410   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:07.661510   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:07.680522   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:07.785059   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:08.040333   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:08.160622   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:08.179915   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:08.283601   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:08.539227   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:08.661165   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:08.679747   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:08.784391   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:09.039047   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:09.161678   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.180346   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.290735   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:09.539239   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:09.660928   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.679558   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.784397   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.039590   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:10.160960   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:10.179146   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:10.283673   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.539560   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:10.661192   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:10.679915   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:10.783839   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:11.039776   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:11.161659   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.180457   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:11.284694   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:11.539432   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:11.660671   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.679950   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:11.783706   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:12.040046   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:12.162761   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:12.180053   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:12.283985   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:12.541358   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:12.661262   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:12.680719   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:12.784145   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:13.039377   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:13.160955   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:13.179165   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:13.283602   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:13.539248   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:13.661225   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:13.680316   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:13.783562   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:14.039621   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:14.160982   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:14.179398   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:14.283915   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:14.539446   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:14.660598   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:14.680098   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:14.783276   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:15.040611   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:15.161273   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:15.180105   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:15.283942   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:15.540124   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:15.661802   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:15.679504   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:15.784188   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.039739   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:16.161593   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:16.180070   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:16.284822   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.539238   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:16.660310   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:16.679470   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:16.783734   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:17.039373   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:17.161296   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.180151   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:17.283701   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:17.540225   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:17.661003   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.761726   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:17.783644   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.039769   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:18.161559   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:18.180182   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:18.283352   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.540202   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:18.661598   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:18.679655   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:18.783707   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:19.039499   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:19.161254   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.180130   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:19.283902   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:19.539530   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:19.661177   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.679706   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:19.784452   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:20.040346   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:20.160644   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:20.180028   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:20.283410   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:20.538896   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:20.661147   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:20.679106   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:20.783435   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:21.038807   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:21.161014   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:21.179345   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:21.283568   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:21.538994   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:21.662006   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:21.679467   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:21.783791   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:22.039201   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:22.160711   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.180297   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.284007   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:22.539433   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:22.660800   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.679062   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.784194   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:23.039616   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:23.160951   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:23.179168   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:23.283634   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:23.539153   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:23.660339   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:23.679445   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:23.783620   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:24.038802   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:24.161300   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:24.179718   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:24.284542   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:24.540375   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:24.661941   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:24.681218   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:24.784080   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.040304   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:25.161099   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:25.179896   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:25.284901   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.540108   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:25.661616   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:25.762076   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:25.782883   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:26.041596   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:26.167630   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.179816   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:26.283571   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:26.539143   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:26.662298   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.679766   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:26.783940   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.052211   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:27.161717   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:27.262964   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:27.284377   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.540510   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:27.661638   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:27.680598   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:27.785738   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:28.041263   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:28.161640   16270 kapi.go:107] duration metric: took 1m5.503505975s to wait for kubernetes.io/minikube-addons=registry ...
	I1211 23:57:28.180917   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:28.284502   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:28.538987   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:28.679585   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:28.784373   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:29.065886   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:29.205118   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:29.284195   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:29.540311   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:29.680148   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:29.783762   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.039754   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.180514   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.283862   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.539085   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.679989   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.783892   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:31.039901   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.180292   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:31.283386   16270 kapi.go:107] duration metric: took 1m8.002672068s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1211 23:57:31.539876   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.679435   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.050813   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.179971   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.539656   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.680852   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.039566   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.180184   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.539695   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.679806   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.039433   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.180098   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.540146   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.679779   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.039495   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.180633   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.539608   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.680729   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.039284   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.180412   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.539966   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.679575   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.038997   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.179740   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.539581   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.680950   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.039067   16270 kapi.go:107] duration metric: took 1m8.002617213s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1211 23:57:38.042620   16270 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-758245 cluster.
	I1211 23:57:38.043999   16270 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1211 23:57:38.045172   16270 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1211 23:57:38.179907   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.683060   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:39.179242   16270 kapi.go:107] duration metric: took 1m15.502616024s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1211 23:57:39.180790   16270 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, cloud-spanner, storage-provisioner, inspektor-gadget, amd-gpu-device-plugin, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1211 23:57:39.181710   16270 addons.go:530] duration metric: took 1m17.424183357s for enable addons: enabled=[nvidia-device-plugin registry-creds cloud-spanner storage-provisioner inspektor-gadget amd-gpu-device-plugin ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1211 23:57:39.181747   16270 start.go:247] waiting for cluster config update ...
	I1211 23:57:39.181767   16270 start.go:256] writing updated cluster config ...
	I1211 23:57:39.181997   16270 ssh_runner.go:195] Run: rm -f paused
	I1211 23:57:39.185762   16270 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1211 23:57:39.188343   16270 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xxwm5" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.191543   16270 pod_ready.go:94] pod "coredns-66bc5c9577-xxwm5" is "Ready"
	I1211 23:57:39.191559   16270 pod_ready.go:86] duration metric: took 3.187758ms for pod "coredns-66bc5c9577-xxwm5" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.192944   16270 pod_ready.go:83] waiting for pod "etcd-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.195813   16270 pod_ready.go:94] pod "etcd-addons-758245" is "Ready"
	I1211 23:57:39.195827   16270 pod_ready.go:86] duration metric: took 2.86476ms for pod "etcd-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.197348   16270 pod_ready.go:83] waiting for pod "kube-apiserver-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.200082   16270 pod_ready.go:94] pod "kube-apiserver-addons-758245" is "Ready"
	I1211 23:57:39.200101   16270 pod_ready.go:86] duration metric: took 2.735703ms for pod "kube-apiserver-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.201546   16270 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.589447   16270 pod_ready.go:94] pod "kube-controller-manager-addons-758245" is "Ready"
	I1211 23:57:39.589485   16270 pod_ready.go:86] duration metric: took 387.907975ms for pod "kube-controller-manager-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.788833   16270 pod_ready.go:83] waiting for pod "kube-proxy-2ldz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:40.189497   16270 pod_ready.go:94] pod "kube-proxy-2ldz5" is "Ready"
	I1211 23:57:40.189519   16270 pod_ready.go:86] duration metric: took 400.664534ms for pod "kube-proxy-2ldz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:40.389318   16270 pod_ready.go:83] waiting for pod "kube-scheduler-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:40.789063   16270 pod_ready.go:94] pod "kube-scheduler-addons-758245" is "Ready"
	I1211 23:57:40.789093   16270 pod_ready.go:86] duration metric: took 399.74164ms for pod "kube-scheduler-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:40.789108   16270 pod_ready.go:40] duration metric: took 1.603316541s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1211 23:57:40.830425   16270 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1211 23:57:40.832093   16270 out.go:179] * Done! kubectl is now configured to use "addons-758245" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.575105093Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-fgrcq/POD" id=36ef2485-e102-4ddf-801c-3ad5e7287839 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.575188125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.581352294Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-fgrcq Namespace:default ID:a3c56afcfc6ec52f3cb4b088b8ee73391cc025bfda0fa2ffad79e78f87fdda51 UID:d59f1a36-37f6-4470-8d30-c9d256324210 NetNS:/var/run/netns/0289f62d-6473-4b85-a5c0-5090947206d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000702598}] Aliases:map[]}"
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.581384084Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-fgrcq to CNI network \"kindnet\" (type=ptp)"
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.591189046Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-fgrcq Namespace:default ID:a3c56afcfc6ec52f3cb4b088b8ee73391cc025bfda0fa2ffad79e78f87fdda51 UID:d59f1a36-37f6-4470-8d30-c9d256324210 NetNS:/var/run/netns/0289f62d-6473-4b85-a5c0-5090947206d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000702598}] Aliases:map[]}"
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.591303913Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-fgrcq for CNI network kindnet (type=ptp)"
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.592062809Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.592862421Z" level=info msg="Ran pod sandbox a3c56afcfc6ec52f3cb4b088b8ee73391cc025bfda0fa2ffad79e78f87fdda51 with infra container: default/hello-world-app-5d498dc89-fgrcq/POD" id=36ef2485-e102-4ddf-801c-3ad5e7287839 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.593907713Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5d765104-d91d-416d-b384-0392b6a253d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.594028682Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=5d765104-d91d-416d-b384-0392b6a253d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.594062699Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=5d765104-d91d-416d-b384-0392b6a253d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.594657014Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=20b58b70-64e8-4edd-886e-21a604452471 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:00:28 addons-758245 crio[774]: time="2025-12-12T00:00:28.600135515Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.452336396Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=20b58b70-64e8-4edd-886e-21a604452471 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.452885469Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=41830930-3693-4e38-a937-bf7cd43621a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.454261149Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d0119021-d1c3-4366-9e4b-cef7a168cb40 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.461336758Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-fgrcq/hello-world-app" id=ad2687ee-b909-4110-b7fa-be9b21672180 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.461504961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.469021389Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.469243715Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/280a2cd83d383fbe522a91944603094934fba4dab3d490820f0aaf986080c64b/merged/etc/passwd: no such file or directory"
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.469284477Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/280a2cd83d383fbe522a91944603094934fba4dab3d490820f0aaf986080c64b/merged/etc/group: no such file or directory"
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.469604441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.507847787Z" level=info msg="Created container 363835e961e4ee813876aea82f07be8fa5407725960aba45ee979720671662b8: default/hello-world-app-5d498dc89-fgrcq/hello-world-app" id=ad2687ee-b909-4110-b7fa-be9b21672180 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.508435047Z" level=info msg="Starting container: 363835e961e4ee813876aea82f07be8fa5407725960aba45ee979720671662b8" id=b1a38371-f2af-489f-bf82-ed7c0194bbfd name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:00:29 addons-758245 crio[774]: time="2025-12-12T00:00:29.510192437Z" level=info msg="Started container" PID=9632 containerID=363835e961e4ee813876aea82f07be8fa5407725960aba45ee979720671662b8 description=default/hello-world-app-5d498dc89-fgrcq/hello-world-app id=b1a38371-f2af-489f-bf82-ed7c0194bbfd name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3c56afcfc6ec52f3cb4b088b8ee73391cc025bfda0fa2ffad79e78f87fdda51
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	363835e961e4e       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   a3c56afcfc6ec       hello-world-app-5d498dc89-fgrcq             default
	bebb2b9dc0db5       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   6c925ef910f90       registry-creds-764b6fb674-rkslf             kube-system
	ed39958f6d6c3       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           2 minutes ago            Running             nginx                                    0                   5b6417969f058       nginx                                       default
	261d6be7407c9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   04414ff40faa4       busybox                                     default
	3732d2a3f8838       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	5cd186f4373ab       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   f765eaa715430       gcp-auth-78565c9fb4-4bv9l                   gcp-auth
	eb9ef1b7664a7       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	569f61ef928d0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	c3b2706600ae1       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	e659f5026c837       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago            Running             gadget                                   0                   f3c1d42c1401f       gadget-bqt8q                                gadget
	664ac9c24ea7c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	2eed610f0b598       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   ec5180c4baa6d       ingress-nginx-controller-85d4c799dd-p5j5s   ingress-nginx
	64e96cfc8140e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   368fa3ba4a572       registry-proxy-9n7hj                        kube-system
	335878b08bc59       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   e3cfa9c4ff142       nvidia-device-plugin-daemonset-5r9hw        kube-system
	6cbca1534843d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	e70723d59b2b1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   8a81a3d12a757       amd-gpu-device-plugin-t4nwx                 kube-system
	2b49798329427       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              patch                                    0                   8533d0fb19ad2       ingress-nginx-admission-patch-cd4bb         ingress-nginx
	7e0fe81bd1c04       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   baea1032a922e       snapshot-controller-7d9fbc56b8-4vpl2        kube-system
	a69fa017a8d1a       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   c74618a3c25db       csi-hostpath-attacher-0                     kube-system
	d4852af2ae305       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   f4ded4c1ed6d3       local-path-provisioner-648f6765c9-6wlm4     local-path-storage
	25bad7ed4d19e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   4a1582e49a1b7       ingress-nginx-admission-create-r7lnc        ingress-nginx
	8e6d8441c0b88       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   5c1b0fc983bec       snapshot-controller-7d9fbc56b8-tgsc4        kube-system
	b054779b4f384       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   87f53fc55af57       yakd-dashboard-5ff678cb9-9jv4p              yakd-dashboard
	aeaae182100e6       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   0afe76df49e5e       csi-hostpath-resizer-0                      kube-system
	b246671cb7ebd       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   a1c838608ad1a       metrics-server-85b7d694d7-lzpx2             kube-system
	533b2e5a9c2d2       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   cd2edf1bf3a73       kube-ingress-dns-minikube                   kube-system
	d00859d804fd2       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   f5daef4c23552       cloud-spanner-emulator-5bdddb765-ttz8g      default
	138f9c8dcb50c       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   95fb50f592ecc       registry-6b586f9694-ctpbw                   kube-system
	a095ba34edcca       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   ea3a14f569e09       coredns-66bc5c9577-xxwm5                    kube-system
	9a13d73cda53b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   48f5833231028       storage-provisioner                         kube-system
	746ec0a05954f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             4 minutes ago            Running             kube-proxy                               0                   3ea4471cba438       kube-proxy-2ldz5                            kube-system
	9f62b444d09ef       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   e84c85e088226       kindnet-vctlp                               kube-system
	c4b3ad93ba2e0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   5bbfb6d65f68e       kube-apiserver-addons-758245                kube-system
	47879b4c9f9dd       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   bc07433e66205       kube-controller-manager-addons-758245       kube-system
	49a709e50508b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   64e73b4884718       etcd-addons-758245                          kube-system
	0ff7242204c8f       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   1330ac5787bff       kube-scheduler-addons-758245                kube-system
	
	
	==> coredns [a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e] <==
	[INFO] 10.244.0.22:42253 - 38035 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130161s
	[INFO] 10.244.0.22:58179 - 3192 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005489818s
	[INFO] 10.244.0.22:39433 - 43415 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005649837s
	[INFO] 10.244.0.22:55774 - 43919 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004270109s
	[INFO] 10.244.0.22:49549 - 2075 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004477515s
	[INFO] 10.244.0.22:51959 - 4964 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004353981s
	[INFO] 10.244.0.22:52561 - 64015 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004507092s
	[INFO] 10.244.0.22:55871 - 30236 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002290435s
	[INFO] 10.244.0.22:52156 - 45926 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00233035s
	[INFO] 10.244.0.27:38399 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000217653s
	[INFO] 10.244.0.27:57960 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00016488s
	[INFO] 10.244.0.30:41471 - 28908 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000183906s
	[INFO] 10.244.0.30:43197 - 37112 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000253073s
	[INFO] 10.244.0.30:33846 - 65200 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000108544s
	[INFO] 10.244.0.30:45107 - 31549 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000161309s
	[INFO] 10.244.0.30:47794 - 50026 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000089244s
	[INFO] 10.244.0.30:35007 - 51563 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000137251s
	[INFO] 10.244.0.30:56900 - 42347 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004899118s
	[INFO] 10.244.0.30:36739 - 42436 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004951304s
	[INFO] 10.244.0.30:40362 - 35338 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004730768s
	[INFO] 10.244.0.30:60702 - 22144 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005287391s
	[INFO] 10.244.0.30:37752 - 41640 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004346159s
	[INFO] 10.244.0.30:41409 - 27414 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004929684s
	[INFO] 10.244.0.30:51171 - 63789 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.00168511s
	[INFO] 10.244.0.30:42587 - 34895 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001718419s
	
	
	==> describe nodes <==
	Name:               addons-758245
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-758245
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=addons-758245
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_11T23_56_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-758245
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-758245"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 11 Dec 2025 23:56:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-758245
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:00:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:00:21 +0000   Thu, 11 Dec 2025 23:56:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:00:21 +0000   Thu, 11 Dec 2025 23:56:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:00:21 +0000   Thu, 11 Dec 2025 23:56:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:00:21 +0000   Thu, 11 Dec 2025 23:57:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-758245
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                7478ade5-211a-4ace-866e-8b508dbc1779
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     cloud-spanner-emulator-5bdddb765-ttz8g       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  default                     hello-world-app-5d498dc89-fgrcq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gadget                      gadget-bqt8q                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  gcp-auth                    gcp-auth-78565c9fb4-4bv9l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-p5j5s    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m6s
	  kube-system                 amd-gpu-device-plugin-t4nwx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 coredns-66bc5c9577-xxwm5                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m8s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 csi-hostpathplugin-5nn2t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 etcd-addons-758245                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m14s
	  kube-system                 kindnet-vctlp                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m8s
	  kube-system                 kube-apiserver-addons-758245                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-addons-758245        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-2ldz5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-addons-758245                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 metrics-server-85b7d694d7-lzpx2              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m6s
	  kube-system                 nvidia-device-plugin-daemonset-5r9hw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 registry-6b586f9694-ctpbw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 registry-creds-764b6fb674-rkslf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 registry-proxy-9n7hj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 snapshot-controller-7d9fbc56b8-4vpl2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 snapshot-controller-7d9fbc56b8-tgsc4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  local-path-storage          local-path-provisioner-648f6765c9-6wlm4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9jv4p               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m18s (x8 over 4m18s)  kubelet          Node addons-758245 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s (x8 over 4m18s)  kubelet          Node addons-758245 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s (x8 over 4m18s)  kubelet          Node addons-758245 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s                  kubelet          Node addons-758245 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s                  kubelet          Node addons-758245 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s                  kubelet          Node addons-758245 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m9s                   node-controller  Node addons-758245 event: Registered Node addons-758245 in Controller
	  Normal  NodeReady                3m27s                  kubelet          Node addons-758245 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47] <==
	{"level":"warn","ts":"2025-12-11T23:56:12.848195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.863385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.875690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.882640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.890787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.897722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.905344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.913175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.920223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.926926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.933962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.940704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.947776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.955065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.961061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.967936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.988461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.995949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:13.003567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:13.059994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:24.110764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:50.458901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:50.465314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:50.480890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:50.487323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43288","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [5cd186f4373abe8e8259e028de1c7e0ad7a5177f5742d52190634427325d42a7] <==
	2025/12/11 23:57:37 GCP Auth Webhook started!
	2025/12/11 23:57:41 Ready to marshal response ...
	2025/12/11 23:57:41 Ready to write response ...
	2025/12/11 23:57:41 Ready to marshal response ...
	2025/12/11 23:57:41 Ready to write response ...
	2025/12/11 23:57:41 Ready to marshal response ...
	2025/12/11 23:57:41 Ready to write response ...
	2025/12/11 23:57:50 Ready to marshal response ...
	2025/12/11 23:57:50 Ready to write response ...
	2025/12/11 23:57:50 Ready to marshal response ...
	2025/12/11 23:57:50 Ready to write response ...
	2025/12/11 23:57:57 Ready to marshal response ...
	2025/12/11 23:57:57 Ready to write response ...
	2025/12/11 23:57:59 Ready to marshal response ...
	2025/12/11 23:57:59 Ready to write response ...
	2025/12/11 23:58:00 Ready to marshal response ...
	2025/12/11 23:58:00 Ready to write response ...
	2025/12/11 23:58:05 Ready to marshal response ...
	2025/12/11 23:58:05 Ready to write response ...
	2025/12/11 23:58:28 Ready to marshal response ...
	2025/12/11 23:58:28 Ready to write response ...
	2025/12/12 00:00:28 Ready to marshal response ...
	2025/12/12 00:00:28 Ready to write response ...
	
	
	==> kernel <==
	 00:00:29 up 42 min,  0 user,  load average: 0.16, 0.56, 0.30
	Linux addons-758245 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc] <==
	I1211 23:58:22.592124       1 main.go:301] handling current node
	I1211 23:58:32.590333       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:58:32.590360       1 main.go:301] handling current node
	I1211 23:58:42.594866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:58:42.594903       1 main.go:301] handling current node
	I1211 23:58:52.592802       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:58:52.592838       1 main.go:301] handling current node
	I1211 23:59:02.592704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:59:02.592743       1 main.go:301] handling current node
	I1211 23:59:12.591703       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:59:12.591733       1 main.go:301] handling current node
	I1211 23:59:22.597789       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:59:22.597818       1 main.go:301] handling current node
	I1211 23:59:32.593466       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:59:32.593523       1 main.go:301] handling current node
	I1211 23:59:42.591873       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:59:42.591902       1 main.go:301] handling current node
	I1211 23:59:52.592550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:59:52.592584       1 main.go:301] handling current node
	I1212 00:00:02.590837       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:00:02.590877       1 main.go:301] handling current node
	I1212 00:00:12.590771       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:00:12.590801       1 main.go:301] handling current node
	I1212 00:00:22.590173       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:00:22.590203       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69] <==
	E1211 23:57:14.846288       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.203.240:443: connect: connection refused" logger="UnhandledError"
	E1211 23:57:14.847651       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.203.240:443: connect: connection refused" logger="UnhandledError"
	E1211 23:57:14.853747       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.203.240:443: connect: connection refused" logger="UnhandledError"
	W1211 23:57:15.846977       1 handler_proxy.go:99] no RequestInfo found in the context
	W1211 23:57:15.847011       1 handler_proxy.go:99] no RequestInfo found in the context
	E1211 23:57:15.847025       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1211 23:57:15.847048       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1211 23:57:15.847091       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1211 23:57:15.848214       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1211 23:57:19.878714       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1211 23:57:19.878830       1 handler_proxy.go:99] no RequestInfo found in the context
	E1211 23:57:19.878867       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1211 23:57:19.889834       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1211 23:57:49.456688       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46696: use of closed network connection
	E1211 23:57:49.594752       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46726: use of closed network connection
	I1211 23:58:00.593318       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1211 23:58:00.788580       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.164.34"}
	I1211 23:58:12.633553       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 00:00:28.342366       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.174.217"}
	
	
	==> kube-controller-manager [47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e] <==
	I1211 23:56:20.444372       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1211 23:56:20.444382       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1211 23:56:20.444351       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1211 23:56:20.444388       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1211 23:56:20.444430       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1211 23:56:20.444442       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1211 23:56:20.444485       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1211 23:56:20.444487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1211 23:56:20.444607       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1211 23:56:20.445035       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1211 23:56:20.445051       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1211 23:56:20.445859       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1211 23:56:20.446829       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1211 23:56:20.449331       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1211 23:56:20.458542       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1211 23:56:20.467060       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1211 23:56:23.033963       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1211 23:56:50.453354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1211 23:56:50.453511       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1211 23:56:50.453546       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1211 23:56:50.472899       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1211 23:56:50.476017       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1211 23:56:50.554565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1211 23:56:50.576744       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1211 23:57:05.400604       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7] <==
	I1211 23:56:22.110162       1 server_linux.go:53] "Using iptables proxy"
	I1211 23:56:22.273415       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1211 23:56:22.376603       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1211 23:56:22.379647       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1211 23:56:22.379772       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:56:22.687004       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1211 23:56:22.687154       1 server_linux.go:132] "Using iptables Proxier"
	I1211 23:56:22.776865       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:56:22.785429       1 server.go:527] "Version info" version="v1.34.2"
	I1211 23:56:22.785710       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:56:22.787827       1 config.go:200] "Starting service config controller"
	I1211 23:56:22.788056       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1211 23:56:22.788685       1 config.go:106] "Starting endpoint slice config controller"
	I1211 23:56:22.789522       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1211 23:56:22.788784       1 config.go:309] "Starting node config controller"
	I1211 23:56:22.789602       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1211 23:56:22.789627       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1211 23:56:22.789264       1 config.go:403] "Starting serviceCIDR config controller"
	I1211 23:56:22.789671       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1211 23:56:22.889681       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1211 23:56:22.889735       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1211 23:56:22.891986       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead] <==
	E1211 23:56:13.463415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1211 23:56:13.463536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1211 23:56:13.463994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1211 23:56:13.464083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1211 23:56:13.464182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1211 23:56:13.464215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1211 23:56:13.464246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1211 23:56:13.464255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1211 23:56:13.464265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1211 23:56:13.464286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1211 23:56:13.464337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1211 23:56:13.464364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:13.464364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:13.464415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1211 23:56:13.464541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1211 23:56:14.297241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1211 23:56:14.302184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1211 23:56:14.355170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1211 23:56:14.384272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:14.464779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1211 23:56:14.487731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:14.591157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1211 23:56:14.638138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1211 23:56:14.651089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1211 23:56:17.061868       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 11 23:58:35 addons-758245 kubelet[1293]: I1211 23:58:35.655443    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9n7hj" secret="" err="secret \"gcp-auth\" not found"
	Dec 11 23:58:35 addons-758245 kubelet[1293]: I1211 23:58:35.823108    1293 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2q5j\" (UniqueName: \"kubernetes.io/projected/e26cfb1c-9b62-4431-b26c-243a3a24508f-kube-api-access-j2q5j\") pod \"e26cfb1c-9b62-4431-b26c-243a3a24508f\" (UID: \"e26cfb1c-9b62-4431-b26c-243a3a24508f\") "
	Dec 11 23:58:35 addons-758245 kubelet[1293]: I1211 23:58:35.823164    1293 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e26cfb1c-9b62-4431-b26c-243a3a24508f-gcp-creds\") pod \"e26cfb1c-9b62-4431-b26c-243a3a24508f\" (UID: \"e26cfb1c-9b62-4431-b26c-243a3a24508f\") "
	Dec 11 23:58:35 addons-758245 kubelet[1293]: I1211 23:58:35.823278    1293 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^49b4ecf3-d6ed-11f0-b44c-eaa43ffffb2c\") pod \"e26cfb1c-9b62-4431-b26c-243a3a24508f\" (UID: \"e26cfb1c-9b62-4431-b26c-243a3a24508f\") "
	Dec 11 23:58:35 addons-758245 kubelet[1293]: I1211 23:58:35.823301    1293 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e26cfb1c-9b62-4431-b26c-243a3a24508f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e26cfb1c-9b62-4431-b26c-243a3a24508f" (UID: "e26cfb1c-9b62-4431-b26c-243a3a24508f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 11 23:58:35 addons-758245 kubelet[1293]: I1211 23:58:35.823423    1293 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e26cfb1c-9b62-4431-b26c-243a3a24508f-gcp-creds\") on node \"addons-758245\" DevicePath \"\""
	Dec 11 23:58:35 addons-758245 kubelet[1293]: I1211 23:58:35.825435    1293 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e26cfb1c-9b62-4431-b26c-243a3a24508f-kube-api-access-j2q5j" (OuterVolumeSpecName: "kube-api-access-j2q5j") pod "e26cfb1c-9b62-4431-b26c-243a3a24508f" (UID: "e26cfb1c-9b62-4431-b26c-243a3a24508f"). InnerVolumeSpecName "kube-api-access-j2q5j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 11 23:58:35 addons-758245 kubelet[1293]: I1211 23:58:35.826851    1293 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^49b4ecf3-d6ed-11f0-b44c-eaa43ffffb2c" (OuterVolumeSpecName: "task-pv-storage") pod "e26cfb1c-9b62-4431-b26c-243a3a24508f" (UID: "e26cfb1c-9b62-4431-b26c-243a3a24508f"). InnerVolumeSpecName "pvc-2dabe1b1-6135-4c38-b87a-7eadec1a2de7". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 11 23:58:35 addons-758245 kubelet[1293]: I1211 23:58:35.924314    1293 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-2dabe1b1-6135-4c38-b87a-7eadec1a2de7\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^49b4ecf3-d6ed-11f0-b44c-eaa43ffffb2c\") on node \"addons-758245\" "
	Dec 11 23:58:35 addons-758245 kubelet[1293]: I1211 23:58:35.924344    1293 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j2q5j\" (UniqueName: \"kubernetes.io/projected/e26cfb1c-9b62-4431-b26c-243a3a24508f-kube-api-access-j2q5j\") on node \"addons-758245\" DevicePath \"\""
	Dec 11 23:58:35 addons-758245 kubelet[1293]: I1211 23:58:35.928330    1293 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-2dabe1b1-6135-4c38-b87a-7eadec1a2de7" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^49b4ecf3-d6ed-11f0-b44c-eaa43ffffb2c") on node "addons-758245"
	Dec 11 23:58:36 addons-758245 kubelet[1293]: I1211 23:58:36.024778    1293 reconciler_common.go:299] "Volume detached for volume \"pvc-2dabe1b1-6135-4c38-b87a-7eadec1a2de7\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^49b4ecf3-d6ed-11f0-b44c-eaa43ffffb2c\") on node \"addons-758245\" DevicePath \"\""
	Dec 11 23:58:36 addons-758245 kubelet[1293]: I1211 23:58:36.190303    1293 scope.go:117] "RemoveContainer" containerID="f0e39bc6d71a02053bdb4a72ece074c9d25f35bb3189cb1004d0742106cc1b82"
	Dec 11 23:58:36 addons-758245 kubelet[1293]: I1211 23:58:36.199942    1293 scope.go:117] "RemoveContainer" containerID="f0e39bc6d71a02053bdb4a72ece074c9d25f35bb3189cb1004d0742106cc1b82"
	Dec 11 23:58:36 addons-758245 kubelet[1293]: E1211 23:58:36.200342    1293 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e39bc6d71a02053bdb4a72ece074c9d25f35bb3189cb1004d0742106cc1b82\": container with ID starting with f0e39bc6d71a02053bdb4a72ece074c9d25f35bb3189cb1004d0742106cc1b82 not found: ID does not exist" containerID="f0e39bc6d71a02053bdb4a72ece074c9d25f35bb3189cb1004d0742106cc1b82"
	Dec 11 23:58:36 addons-758245 kubelet[1293]: I1211 23:58:36.200395    1293 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e39bc6d71a02053bdb4a72ece074c9d25f35bb3189cb1004d0742106cc1b82"} err="failed to get container status \"f0e39bc6d71a02053bdb4a72ece074c9d25f35bb3189cb1004d0742106cc1b82\": rpc error: code = NotFound desc = could not find container \"f0e39bc6d71a02053bdb4a72ece074c9d25f35bb3189cb1004d0742106cc1b82\": container with ID starting with f0e39bc6d71a02053bdb4a72ece074c9d25f35bb3189cb1004d0742106cc1b82 not found: ID does not exist"
	Dec 11 23:58:37 addons-758245 kubelet[1293]: I1211 23:58:37.657006    1293 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e26cfb1c-9b62-4431-b26c-243a3a24508f" path="/var/lib/kubelet/pods/e26cfb1c-9b62-4431-b26c-243a3a24508f/volumes"
	Dec 11 23:58:50 addons-758245 kubelet[1293]: I1211 23:58:50.653912    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-t4nwx" secret="" err="secret \"gcp-auth\" not found"
	Dec 11 23:58:56 addons-758245 kubelet[1293]: I1211 23:58:56.654680    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5r9hw" secret="" err="secret \"gcp-auth\" not found"
	Dec 11 23:59:54 addons-758245 kubelet[1293]: I1211 23:59:54.654242    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9n7hj" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:00:01 addons-758245 kubelet[1293]: I1212 00:00:01.654585    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5r9hw" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:00:19 addons-758245 kubelet[1293]: I1212 00:00:19.654402    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-t4nwx" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:00:28 addons-758245 kubelet[1293]: I1212 00:00:28.372145    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j9vm\" (UniqueName: \"kubernetes.io/projected/d59f1a36-37f6-4470-8d30-c9d256324210-kube-api-access-7j9vm\") pod \"hello-world-app-5d498dc89-fgrcq\" (UID: \"d59f1a36-37f6-4470-8d30-c9d256324210\") " pod="default/hello-world-app-5d498dc89-fgrcq"
	Dec 12 00:00:28 addons-758245 kubelet[1293]: I1212 00:00:28.372225    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d59f1a36-37f6-4470-8d30-c9d256324210-gcp-creds\") pod \"hello-world-app-5d498dc89-fgrcq\" (UID: \"d59f1a36-37f6-4470-8d30-c9d256324210\") " pod="default/hello-world-app-5d498dc89-fgrcq"
	Dec 12 00:00:29 addons-758245 kubelet[1293]: I1212 00:00:29.595324    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-fgrcq" podStartSLOduration=0.73598147 podStartE2EDuration="1.595303136s" podCreationTimestamp="2025-12-12 00:00:28 +0000 UTC" firstStartedPulling="2025-12-12 00:00:28.594328161 +0000 UTC m=+253.015522869" lastFinishedPulling="2025-12-12 00:00:29.45364982 +0000 UTC m=+253.874844535" observedRunningTime="2025-12-12 00:00:29.594609996 +0000 UTC m=+254.015804726" watchObservedRunningTime="2025-12-12 00:00:29.595303136 +0000 UTC m=+254.016497865"
	
	
	==> storage-provisioner [9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98] <==
	W1212 00:00:05.854094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:07.857410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:07.861114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:09.864427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:09.868637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:11.870604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:11.874436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:13.877302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:13.881662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:15.884669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:15.888020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:17.890716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:17.895234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:19.898014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:19.902688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:21.905200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:21.909407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:23.912307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:23.915734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:25.918362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:25.921798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:27.924487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:27.928206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:29.930860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:00:29.933920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
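The storage-provisioner log above repeatedly warns that "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice". For reference only — this is not the provisioner's actual code, and the kubeconfig path and namespace below are placeholders — a minimal client-go sketch of the switch the warning asks for could look like this:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; a component running in-cluster would use rest.InClusterConfig() instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Deprecated path that triggers the warning seen in the log:
		//   cs.CoreV1().Endpoints("kube-system").List(...)
		// Replacement suggested by the warning: discovery.k8s.io/v1 EndpointSlice.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			for _, ep := range s.Endpoints {
				fmt.Println(s.Name, ep.Addresses)
			}
		}
	}
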
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-758245 -n addons-758245
helpers_test.go:270: (dbg) Run:  kubectl --context addons-758245 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-r7lnc ingress-nginx-admission-patch-cd4bb
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-758245 describe pod ingress-nginx-admission-create-r7lnc ingress-nginx-admission-patch-cd4bb
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-758245 describe pod ingress-nginx-admission-create-r7lnc ingress-nginx-admission-patch-cd4bb: exit status 1 (53.808146ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r7lnc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cd4bb" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-758245 describe pod ingress-nginx-admission-create-r7lnc ingress-nginx-admission-patch-cd4bb: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (233.247697ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:00:30.636216   30723 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:00:30.636579   30723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:00:30.636589   30723 out.go:374] Setting ErrFile to fd 2...
	I1212 00:00:30.636593   30723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:00:30.636781   30723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:00:30.637065   30723 mustload.go:66] Loading cluster: addons-758245
	I1212 00:00:30.637374   30723 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:00:30.637393   30723 addons.go:622] checking whether the cluster is paused
	I1212 00:00:30.637469   30723 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:00:30.637495   30723 host.go:66] Checking if "addons-758245" exists ...
	I1212 00:00:30.637846   30723 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1212 00:00:30.656128   30723 ssh_runner.go:195] Run: systemctl --version
	I1212 00:00:30.656177   30723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1212 00:00:30.673899   30723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1212 00:00:30.766414   30723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:00:30.766524   30723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:00:30.793682   30723 cri.go:89] found id: "bebb2b9dc0db586ac1f726cd9cf64469bb0451cec9c7113a8be04e821596c916"
	I1212 00:00:30.793705   30723 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1212 00:00:30.793711   30723 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1212 00:00:30.793715   30723 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1212 00:00:30.793720   30723 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1212 00:00:30.793726   30723 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1212 00:00:30.793730   30723 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1212 00:00:30.793734   30723 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1212 00:00:30.793739   30723 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1212 00:00:30.793747   30723 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1212 00:00:30.793755   30723 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1212 00:00:30.793759   30723 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1212 00:00:30.793764   30723 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1212 00:00:30.793769   30723 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1212 00:00:30.793778   30723 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1212 00:00:30.793787   30723 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1212 00:00:30.793794   30723 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1212 00:00:30.793801   30723 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1212 00:00:30.793805   30723 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1212 00:00:30.793809   30723 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1212 00:00:30.793815   30723 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1212 00:00:30.793834   30723 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1212 00:00:30.793856   30723 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1212 00:00:30.793865   30723 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1212 00:00:30.793871   30723 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1212 00:00:30.793878   30723 cri.go:89] found id: ""
	I1212 00:00:30.793920   30723 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:00:30.807179   30723 out.go:203] 
	W1212 00:00:30.808280   30723 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:00:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:00:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:00:30.808295   30723 out.go:285] * 
	* 
	W1212 00:00:30.811258   30723 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:00:30.812400   30723 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable ingress --alsologtostderr -v=1: exit status 11 (230.247989ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:00:30.869409   30788 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:00:30.869573   30788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:00:30.869586   30788 out.go:374] Setting ErrFile to fd 2...
	I1212 00:00:30.869593   30788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:00:30.869860   30788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:00:30.870171   30788 mustload.go:66] Loading cluster: addons-758245
	I1212 00:00:30.870645   30788 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:00:30.870670   30788 addons.go:622] checking whether the cluster is paused
	I1212 00:00:30.870795   30788 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:00:30.870811   30788 host.go:66] Checking if "addons-758245" exists ...
	I1212 00:00:30.871311   30788 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1212 00:00:30.888690   30788 ssh_runner.go:195] Run: systemctl --version
	I1212 00:00:30.888750   30788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1212 00:00:30.904648   30788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1212 00:00:30.997253   30788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:00:30.997339   30788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:00:31.024833   30788 cri.go:89] found id: "bebb2b9dc0db586ac1f726cd9cf64469bb0451cec9c7113a8be04e821596c916"
	I1212 00:00:31.024850   30788 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1212 00:00:31.024854   30788 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1212 00:00:31.024857   30788 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1212 00:00:31.024860   30788 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1212 00:00:31.024863   30788 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1212 00:00:31.024865   30788 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1212 00:00:31.024868   30788 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1212 00:00:31.024872   30788 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1212 00:00:31.024880   30788 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1212 00:00:31.024886   30788 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1212 00:00:31.024889   30788 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1212 00:00:31.024892   30788 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1212 00:00:31.024895   30788 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1212 00:00:31.024898   30788 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1212 00:00:31.024908   30788 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1212 00:00:31.024915   30788 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1212 00:00:31.024920   30788 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1212 00:00:31.024923   30788 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1212 00:00:31.024926   30788 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1212 00:00:31.024931   30788 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1212 00:00:31.024934   30788 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1212 00:00:31.024936   30788 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1212 00:00:31.024939   30788 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1212 00:00:31.024941   30788 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1212 00:00:31.024943   30788 cri.go:89] found id: ""
	I1212 00:00:31.024978   30788 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:00:31.038274   30788 out.go:203] 
	W1212 00:00:31.039372   30788 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:00:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:00:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:00:31.039387   30788 out.go:285] * 
	* 
	W1212 00:00:31.042280   30788 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:00:31.043368   30788 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (150.69s)
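Every `addons disable` invocation in this report fails the same way: after the crictl listing of kube-system containers succeeds, minikube's paused check shells out to `sudo runc list -f json`, which exits non-zero on this crio node because /run/runc does not exist, and that surfaces as MK_ADDON_DISABLE_PAUSED. A minimal standalone sketch of that failing step (not minikube's own code; the command string is taken verbatim from the stderr above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command the stderr above shows minikube running over SSH:
		//   sudo runc list -f json
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On this node the command fails with
			//   open /run/runc: no such file or directory
			// which is the error reported as MK_ADDON_DISABLE_PAUSED above.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc containers: %s\n", out)
	}
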

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-bqt8q" [a33942c8-b457-48f7-992e-b64ac05f73b2] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.002912303s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (279.266447ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 23:58:02.969466   26742 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:58:02.969635   26742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:02.969645   26742 out.go:374] Setting ErrFile to fd 2...
	I1211 23:58:02.969649   26742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:02.969829   26742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:58:02.970060   26742 mustload.go:66] Loading cluster: addons-758245
	I1211 23:58:02.970350   26742 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:02.970366   26742 addons.go:622] checking whether the cluster is paused
	I1211 23:58:02.970450   26742 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:02.970507   26742 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:58:02.970860   26742 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:58:02.989116   26742 ssh_runner.go:195] Run: systemctl --version
	I1211 23:58:02.989175   26742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:58:03.007131   26742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:58:03.103043   26742 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:58:03.103141   26742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:58:03.132350   26742 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:58:03.132370   26742 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:58:03.132376   26742 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:58:03.132381   26742 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:58:03.132385   26742 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:58:03.132390   26742 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:58:03.132394   26742 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:58:03.132399   26742 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:58:03.132404   26742 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:58:03.132412   26742 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:58:03.132417   26742 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:58:03.132423   26742 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:58:03.132430   26742 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:58:03.132436   26742 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:58:03.132452   26742 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:58:03.132468   26742 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:58:03.132486   26742 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:58:03.132491   26742 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:58:03.132495   26742 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:58:03.132500   26742 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:58:03.132509   26742 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:58:03.132514   26742 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:58:03.132520   26742 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:58:03.132525   26742 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:58:03.132533   26742 cri.go:89] found id: ""
	I1211 23:58:03.132584   26742 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:58:03.177236   26742 out.go:203] 
	W1211 23:58:03.183109   26742 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:58:03.183137   26742 out.go:285] * 
	* 
	W1211 23:58:03.186164   26742 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:58:03.187249   26742 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.28s)
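The InspektorGadget test above first waits up to 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" to become healthy before attempting the disable. As a rough illustration only — not the helper in helpers_test.go; the kubeconfig path, poll interval, and timeout are arbitrary, while the selector and namespace come from the test output — a client-go poll for a Running pod by label selector might look like:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll until at least one matching pod is Running, or the timeout expires.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 8*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("gadget").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=gadget"})
				if err != nil {
					return false, err
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("k8s-app=gadget pods are running")
	}
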

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.29s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.724951ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-lzpx2" [b6634516-aafd-439b-aa29-2feb3678dca5] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003066986s
addons_test.go:465: (dbg) Run:  kubectl --context addons-758245 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (231.132452ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 23:58:00.183458   26000 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:58:00.183763   26000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:00.183775   26000 out.go:374] Setting ErrFile to fd 2...
	I1211 23:58:00.183779   26000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:00.183964   26000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:58:00.184217   26000 mustload.go:66] Loading cluster: addons-758245
	I1211 23:58:00.184542   26000 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:00.184563   26000 addons.go:622] checking whether the cluster is paused
	I1211 23:58:00.184687   26000 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:00.184707   26000 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:58:00.185190   26000 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:58:00.202591   26000 ssh_runner.go:195] Run: systemctl --version
	I1211 23:58:00.202642   26000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:58:00.219393   26000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:58:00.312963   26000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:58:00.313023   26000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:58:00.340522   26000 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:58:00.340550   26000 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:58:00.340555   26000 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:58:00.340558   26000 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:58:00.340561   26000 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:58:00.340565   26000 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:58:00.340568   26000 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:58:00.340570   26000 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:58:00.340573   26000 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:58:00.340583   26000 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:58:00.340587   26000 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:58:00.340589   26000 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:58:00.340592   26000 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:58:00.340595   26000 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:58:00.340597   26000 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:58:00.340608   26000 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:58:00.340615   26000 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:58:00.340619   26000 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:58:00.340622   26000 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:58:00.340625   26000 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:58:00.340627   26000 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:58:00.340630   26000 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:58:00.340633   26000 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:58:00.340636   26000 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:58:00.340639   26000 cri.go:89] found id: ""
	I1211 23:58:00.340685   26000 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:58:00.353544   26000 out.go:203] 
	W1211 23:58:00.354523   26000 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:58:00.354537   26000 out.go:285] * 
	* 
	W1211 23:58:00.357382   26000 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:58:00.358401   26000 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.29s)

                                                
                                    
TestAddons/parallel/CSI (44.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1211 23:57:52.293832   14503 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1211 23:57:52.297410   14503 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1211 23:57:52.297442   14503 kapi.go:107] duration metric: took 3.624175ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.637414ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-758245 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-758245 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [f526cd53-0d4a-4176-bd2c-b213452ae10d] Pending
helpers_test.go:353: "task-pv-pod" [f526cd53-0d4a-4176-bd2c-b213452ae10d] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003145928s
addons_test.go:574: (dbg) Run:  kubectl --context addons-758245 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-758245 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-758245 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-758245 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-758245 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-758245 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-758245 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [e26cfb1c-9b62-4431-b26c-243a3a24508f] Pending
helpers_test.go:353: "task-pv-pod-restore" [e26cfb1c-9b62-4431-b26c-243a3a24508f] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.002985354s
addons_test.go:616: (dbg) Run:  kubectl --context addons-758245 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-758245 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-758245 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (228.626826ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 23:58:36.578863   28428 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:58:36.578984   28428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:36.578993   28428 out.go:374] Setting ErrFile to fd 2...
	I1211 23:58:36.578997   28428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:36.579176   28428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:58:36.579417   28428 mustload.go:66] Loading cluster: addons-758245
	I1211 23:58:36.579736   28428 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:36.579752   28428 addons.go:622] checking whether the cluster is paused
	I1211 23:58:36.579829   28428 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:36.579840   28428 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:58:36.580162   28428 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:58:36.597754   28428 ssh_runner.go:195] Run: systemctl --version
	I1211 23:58:36.597795   28428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:58:36.614035   28428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:58:36.706357   28428 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:58:36.706443   28428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:58:36.733523   28428 cri.go:89] found id: "bebb2b9dc0db586ac1f726cd9cf64469bb0451cec9c7113a8be04e821596c916"
	I1211 23:58:36.733543   28428 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:58:36.733547   28428 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:58:36.733551   28428 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:58:36.733554   28428 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:58:36.733558   28428 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:58:36.733560   28428 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:58:36.733563   28428 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:58:36.733566   28428 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:58:36.733572   28428 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:58:36.733575   28428 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:58:36.733577   28428 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:58:36.733580   28428 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:58:36.733583   28428 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:58:36.733586   28428 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:58:36.733592   28428 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:58:36.733597   28428 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:58:36.733602   28428 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:58:36.733604   28428 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:58:36.733607   28428 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:58:36.733612   28428 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:58:36.733618   28428 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:58:36.733621   28428 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:58:36.733624   28428 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:58:36.733627   28428 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:58:36.733630   28428 cri.go:89] found id: ""
	I1211 23:58:36.733673   28428 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:58:36.746632   28428 out.go:203] 
	W1211 23:58:36.747876   28428 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:58:36.747895   28428 out.go:285] * 
	* 
	W1211 23:58:36.750886   28428 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:58:36.751970   28428 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (232.238076ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 23:58:36.809306   28489 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:58:36.809597   28489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:36.809606   28489 out.go:374] Setting ErrFile to fd 2...
	I1211 23:58:36.809610   28489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:36.809768   28489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:58:36.810024   28489 mustload.go:66] Loading cluster: addons-758245
	I1211 23:58:36.810324   28489 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:36.810341   28489 addons.go:622] checking whether the cluster is paused
	I1211 23:58:36.810417   28489 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:36.810427   28489 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:58:36.810761   28489 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:58:36.827745   28489 ssh_runner.go:195] Run: systemctl --version
	I1211 23:58:36.827794   28489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:58:36.843512   28489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:58:36.936660   28489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:58:36.936742   28489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:58:36.965529   28489 cri.go:89] found id: "bebb2b9dc0db586ac1f726cd9cf64469bb0451cec9c7113a8be04e821596c916"
	I1211 23:58:36.965552   28489 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:58:36.965558   28489 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:58:36.965563   28489 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:58:36.965567   28489 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:58:36.965573   28489 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:58:36.965578   28489 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:58:36.965582   28489 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:58:36.965586   28489 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:58:36.965594   28489 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:58:36.965604   28489 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:58:36.965609   28489 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:58:36.965616   28489 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:58:36.965621   28489 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:58:36.965629   28489 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:58:36.965645   28489 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:58:36.965653   28489 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:58:36.965658   28489 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:58:36.965662   28489 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:58:36.965667   28489 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:58:36.965672   28489 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:58:36.965680   28489 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:58:36.965685   28489 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:58:36.965691   28489 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:58:36.965698   28489 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:58:36.965703   28489 cri.go:89] found id: ""
	I1211 23:58:36.965750   28489 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:58:36.979367   28489 out.go:203] 
	W1211 23:58:36.980467   28489 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:58:36.980497   28489 out.go:285] * 
	* 
	W1211 23:58:36.983418   28489 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:58:36.984506   28489 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (44.70s)
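Note: the CSI scenario itself completed (the PVC bound, task-pv-pod ran, the snapshot became ready, and the restored PVC and pod came up); the FAIL comes only from the same paused-state check during the addon-disable teardown. The readyToUse wait that helpers_test.go:428 performs on the volume snapshot above can be mirrored with a small standalone poller. The sketch below is an illustration only, assuming the addons-758245 kubectl context and the new-snapshot-demo snapshot from this run exist; it is not a replacement for the helper.

// pollsnapshot.go: hedged sketch of the readiness polling done by the helpers above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		// Same jsonpath query as the helper: read .status.readyToUse on the snapshot.
		out, _ := exec.Command("kubectl", "--context", "addons-758245",
			"get", "volumesnapshot", "new-snapshot-demo", "-n", "default",
			"-o", "jsonpath={.status.readyToUse}").Output()
		if strings.TrimSpace(string(out)) == "true" {
			fmt.Println("snapshot is ready to use")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("snapshot never became ready")
}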

                                                
                                    
TestAddons/parallel/Headlamp (2.46s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-758245 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-758245 --alsologtostderr -v=1: exit status 11 (257.507355ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 23:57:49.895653   24495 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:57:49.895860   24495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:49.895874   24495 out.go:374] Setting ErrFile to fd 2...
	I1211 23:57:49.895882   24495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:49.896191   24495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:57:49.896603   24495 mustload.go:66] Loading cluster: addons-758245
	I1211 23:57:49.897095   24495 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:57:49.897129   24495 addons.go:622] checking whether the cluster is paused
	I1211 23:57:49.897271   24495 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:57:49.897292   24495 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:57:49.897999   24495 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:57:49.915797   24495 ssh_runner.go:195] Run: systemctl --version
	I1211 23:57:49.915854   24495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:57:49.933143   24495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:57:50.028710   24495 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:57:50.028799   24495 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:57:50.058064   24495 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:57:50.058088   24495 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:57:50.058094   24495 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:57:50.058099   24495 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:57:50.058103   24495 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:57:50.058108   24495 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:57:50.058113   24495 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:57:50.058118   24495 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:57:50.058123   24495 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:57:50.058134   24495 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:57:50.058141   24495 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:57:50.058146   24495 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:57:50.058155   24495 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:57:50.058160   24495 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:57:50.058168   24495 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:57:50.058179   24495 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:57:50.058184   24495 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:57:50.058188   24495 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:57:50.058191   24495 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:57:50.058194   24495 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:57:50.058199   24495 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:57:50.058202   24495 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:57:50.058205   24495 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:57:50.058208   24495 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:57:50.058215   24495 cri.go:89] found id: ""
	I1211 23:57:50.058251   24495 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:57:50.075063   24495 out.go:203] 
	W1211 23:57:50.076192   24495 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:57:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:57:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:57:50.076218   24495 out.go:285] * 
	* 
	W1211 23:57:50.080782   24495 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:57:50.081931   24495 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-758245 --alsologtostderr -v=1": exit status 11
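As with the disable calls earlier, the enable path aborts inside the same paused-state check, only with reason code MK_ADDON_ENABLE_PAUSED instead of MK_ADDON_DISABLE_PAUSED. A caller can tell this failure mode apart from a genuine headlamp problem by looking for those reason codes in the combined output. The sketch below is a hypothetical wrapper, not minikube or test-suite code; the binary path and profile name are simply taken from this run.

// classify.go: hedged sketch that re-runs the enable command from this test and
// classifies a failure by the reason codes seen in this report.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "addons-758245",
		"addons", "enable", "headlamp", "--alsologtostderr", "-v=1")
	out, err := cmd.CombinedOutput()
	text := string(out)
	switch {
	case err == nil:
		fmt.Println("headlamp enabled")
	case strings.Contains(text, "MK_ADDON_ENABLE_PAUSED"),
		strings.Contains(text, "MK_ADDON_DISABLE_PAUSED"):
		fmt.Println("addon toggle blocked by the paused-state check, not by the addon itself")
	default:
		fmt.Printf("enable failed for another reason: %v\n", err)
	}
}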
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-758245
helpers_test.go:244: (dbg) docker inspect addons-758245:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04",
	        "Created": "2025-12-11T23:55:59.936546688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T23:55:59.970959441Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04/hostname",
	        "HostsPath": "/var/lib/docker/containers/188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04/hosts",
	        "LogPath": "/var/lib/docker/containers/188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04/188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04-json.log",
	        "Name": "/addons-758245",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-758245:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-758245",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "188206e030f11f1c5a3dcad2a126d87ace8d07e7c5574730998ee297f8402c04",
	                "LowerDir": "/var/lib/docker/overlay2/24ea4780e7b4c26af7d263dbb2b4589d666aed6254f0dc74fdcbd2979e0db87a-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24ea4780e7b4c26af7d263dbb2b4589d666aed6254f0dc74fdcbd2979e0db87a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24ea4780e7b4c26af7d263dbb2b4589d666aed6254f0dc74fdcbd2979e0db87a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24ea4780e7b4c26af7d263dbb2b4589d666aed6254f0dc74fdcbd2979e0db87a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-758245",
	                "Source": "/var/lib/docker/volumes/addons-758245/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-758245",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-758245",
	                "name.minikube.sigs.k8s.io": "addons-758245",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "41c22fc77b15063a4a597947417e13bb5a60f28755aef588fb8ddba38cf6acb6",
	            "SandboxKey": "/var/run/docker/netns/41c22fc77b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-758245": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8161cef5b05612868a973846e114be2fd7b210990d214b28e9e242051787a510",
	                    "EndpointID": "127e82bcd5295bbd0571f6b12e0ef42de13f73a6d84f53fafcbc0b92b289978e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "c2:56:be:f5:10:4f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-758245",
	                        "188206e030f1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-758245 -n addons-758245
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-758245 logs -n 25: (1.074819s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-628337 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-628337   │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-628337                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-628337   │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ -o=json --download-only -p download-only-422944 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-422944   │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-422944                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-422944   │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ -o=json --download-only -p download-only-712196 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-712196   │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-712196                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-712196   │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-628337                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-628337   │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-422944                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-422944   │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-712196                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-712196   │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ --download-only -p download-docker-646254 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-646254 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ -p download-docker-646254                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-646254 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ --download-only -p binary-mirror-307462 --alsologtostderr --binary-mirror http://127.0.0.1:33495 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-307462   │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ -p binary-mirror-307462                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-307462   │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ addons  │ enable dashboard -p addons-758245                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-758245          │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-758245                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-758245          │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ start   │ -p addons-758245 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-758245          │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:57 UTC │
	│ addons  │ addons-758245 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-758245          │ jenkins │ v1.37.0 │ 11 Dec 25 23:57 UTC │                     │
	│ addons  │ addons-758245 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-758245          │ jenkins │ v1.37.0 │ 11 Dec 25 23:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-758245 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-758245          │ jenkins │ v1.37.0 │ 11 Dec 25 23:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
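Editorial note: the header lines above give the klog text format used by every entry that follows. Purely as an illustration (this is not part of the captured output and not minikube code), a minimal Go sketch that splits one such line into its fields; the regular expression and the sample line are assumptions:

    // Editorial sketch, not minikube code: split a klog-format line
    // ("[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg") into fields.
    package main

    import (
        "fmt"
        "regexp"
    )

    // Groups: severity, month+day, wall-clock time, thread id, file:line, message.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.*)$`)

    func main() {
        sample := "I1211 23:55:37.451647   16270 out.go:360] Setting OutFile to fd 1 ..."
        if m := klogLine.FindStringSubmatch(sample); m != nil {
            fmt.Printf("severity=%s date=%s time=%s tid=%s at=%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }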
	I1211 23:55:37.451647   16270 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:37.451737   16270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:37.451745   16270 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:37.451749   16270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:37.451957   16270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:55:37.452424   16270 out.go:368] Setting JSON to false
	I1211 23:55:37.453183   16270 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2283,"bootTime":1765495054,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:37.453266   16270 start.go:143] virtualization: kvm guest
	I1211 23:55:37.455001   16270 out.go:179] * [addons-758245] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:37.456446   16270 notify.go:221] Checking for updates...
	I1211 23:55:37.456459   16270 out.go:179]   - MINIKUBE_LOCATION=22101
	I1211 23:55:37.457617   16270 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:37.458710   16270 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1211 23:55:37.459737   16270 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1211 23:55:37.460752   16270 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:55:37.461731   16270 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:55:37.462744   16270 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:37.485733   16270 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1211 23:55:37.485817   16270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:37.536654   16270 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-11 23:55:37.527785954 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1211 23:55:37.536745   16270 docker.go:319] overlay module found
	I1211 23:55:37.538254   16270 out.go:179] * Using the docker driver based on user configuration
	I1211 23:55:37.539174   16270 start.go:309] selected driver: docker
	I1211 23:55:37.539184   16270 start.go:927] validating driver "docker" against <nil>
	I1211 23:55:37.539195   16270 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:55:37.539769   16270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:37.587801   16270 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-11 23:55:37.579390861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1211 23:55:37.587932   16270 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:37.588149   16270 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:55:37.589581   16270 out.go:179] * Using Docker driver with root privileges
	I1211 23:55:37.590674   16270 cni.go:84] Creating CNI manager for ""
	I1211 23:55:37.590747   16270 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:55:37.590761   16270 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 23:55:37.590829   16270 start.go:353] cluster config:
	{Name:addons-758245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-758245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1211 23:55:37.591895   16270 out.go:179] * Starting "addons-758245" primary control-plane node in "addons-758245" cluster
	I1211 23:55:37.592757   16270 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 23:55:37.593740   16270 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 23:55:37.594736   16270 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:55:37.594762   16270 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:55:37.594768   16270 cache.go:65] Caching tarball of preloaded images
	I1211 23:55:37.594827   16270 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 23:55:37.594857   16270 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:55:37.594865   16270 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1211 23:55:37.595204   16270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/config.json ...
	I1211 23:55:37.595232   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/config.json: {Name:mk6ff817bdab43c8ad5af9ad2a96e675f76a8d11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:55:37.610032   16270 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1211 23:55:37.610121   16270 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1211 23:55:37.610154   16270 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1211 23:55:37.610164   16270 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1211 23:55:37.610188   16270 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1211 23:55:37.610193   16270 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1211 23:55:51.379114   16270 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1211 23:55:51.379154   16270 cache.go:243] Successfully downloaded all kic artifacts
	I1211 23:55:51.379191   16270 start.go:360] acquireMachinesLock for addons-758245: {Name:mk3bbf18ce1e2e085e94a157b7afb1e5e505c9fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:55:51.379289   16270 start.go:364] duration metric: took 78.675µs to acquireMachinesLock for "addons-758245"
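Editorial note: the machines lock above is acquired with Delay:500ms and Timeout:10m0s. As a rough illustration of that retry-until-deadline pattern only (this is not minikube's lock package), a minimal Go sketch using a plain O_EXCL lock file; the file name is an assumption:

    // Editorial sketch, not minikube's lock package: retry creating an
    // exclusive lock file every delay until timeout, as the log's
    // Delay/Timeout fields suggest.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire retries creating path exclusively every delay until timeout.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil // release callback
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire(os.TempDir()+"/addons-758245.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; machine creation would run here")
    }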
	I1211 23:55:51.379313   16270 start.go:93] Provisioning new machine with config: &{Name:addons-758245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-758245 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:55:51.379389   16270 start.go:125] createHost starting for "" (driver="docker")
	I1211 23:55:51.437889   16270 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1211 23:55:51.438139   16270 start.go:159] libmachine.API.Create for "addons-758245" (driver="docker")
	I1211 23:55:51.438168   16270 client.go:173] LocalClient.Create starting
	I1211 23:55:51.438289   16270 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem
	I1211 23:55:51.530498   16270 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem
	I1211 23:55:51.560510   16270 cli_runner.go:164] Run: docker network inspect addons-758245 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1211 23:55:51.578249   16270 cli_runner.go:211] docker network inspect addons-758245 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1211 23:55:51.578312   16270 network_create.go:284] running [docker network inspect addons-758245] to gather additional debugging logs...
	I1211 23:55:51.578335   16270 cli_runner.go:164] Run: docker network inspect addons-758245
	W1211 23:55:51.593493   16270 cli_runner.go:211] docker network inspect addons-758245 returned with exit code 1
	I1211 23:55:51.593520   16270 network_create.go:287] error running [docker network inspect addons-758245]: docker network inspect addons-758245: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-758245 not found
	I1211 23:55:51.593532   16270 network_create.go:289] output of [docker network inspect addons-758245]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-758245 not found
	
	** /stderr **
	I1211 23:55:51.593656   16270 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 23:55:51.609942   16270 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1e4c0}
	I1211 23:55:51.609985   16270 network_create.go:124] attempt to create docker network addons-758245 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1211 23:55:51.610025   16270 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-758245 addons-758245
	I1211 23:55:51.935647   16270 network_create.go:108] docker network addons-758245 192.168.49.0/24 created
	I1211 23:55:51.935682   16270 kic.go:121] calculated static IP "192.168.49.2" for the "addons-758245" container
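Editorial note: the preceding lines show the driver picking the free subnet 192.168.49.0/24, reserving .1 for the gateway, and assigning .2 to the container. A minimal Go sketch of that arithmetic, assuming net/netip and a /24 prefix; the helper name is made up for illustration and is not minikube's implementation:

    // Editorial sketch, not minikube code: derive the first client address of
    // a /24 the way the lines above describe it (network .0, gateway .1,
    // first container .2).
    package main

    import (
        "fmt"
        "net/netip"
    )

    func firstClientIP(cidr string) (netip.Addr, error) {
        p, err := netip.ParsePrefix(cidr)
        if err != nil {
            return netip.Addr{}, err
        }
        ip := p.Masked().Addr()      // network address, e.g. 192.168.49.0
        return ip.Next().Next(), nil // skip the gateway (.1), return .2
    }

    func main() {
        ip, err := firstClientIP("192.168.49.0/24")
        if err != nil {
            panic(err)
        }
        fmt.Println(ip) // 192.168.49.2
    }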
	I1211 23:55:51.935764   16270 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1211 23:55:51.951049   16270 cli_runner.go:164] Run: docker volume create addons-758245 --label name.minikube.sigs.k8s.io=addons-758245 --label created_by.minikube.sigs.k8s.io=true
	I1211 23:55:52.063194   16270 oci.go:103] Successfully created a docker volume addons-758245
	I1211 23:55:52.063292   16270 cli_runner.go:164] Run: docker run --rm --name addons-758245-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758245 --entrypoint /usr/bin/test -v addons-758245:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1211 23:55:56.177708   16270 cli_runner.go:217] Completed: docker run --rm --name addons-758245-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758245 --entrypoint /usr/bin/test -v addons-758245:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (4.114369659s)
	I1211 23:55:56.177737   16270 oci.go:107] Successfully prepared a docker volume addons-758245
	I1211 23:55:56.177786   16270 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:55:56.177797   16270 kic.go:194] Starting extracting preloaded images to volume ...
	I1211 23:55:56.177853   16270 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-758245:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1211 23:55:59.869845   16270 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-758245:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.691931221s)
	I1211 23:55:59.869882   16270 kic.go:203] duration metric: took 3.692080073s to extract preloaded images to volume ...
	W1211 23:55:59.869977   16270 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1211 23:55:59.870010   16270 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1211 23:55:59.870047   16270 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1211 23:55:59.922785   16270 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-758245 --name addons-758245 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758245 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-758245 --network addons-758245 --ip 192.168.49.2 --volume addons-758245:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1211 23:56:00.197337   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Running}}
	I1211 23:56:00.215120   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:00.232234   16270 cli_runner.go:164] Run: docker exec addons-758245 stat /var/lib/dpkg/alternatives/iptables
	I1211 23:56:00.279932   16270 oci.go:144] the created container "addons-758245" has a running status.
	I1211 23:56:00.279961   16270 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa...
	I1211 23:56:00.311861   16270 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1211 23:56:00.335200   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:00.357620   16270 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1211 23:56:00.357641   16270 kic_runner.go:114] Args: [docker exec --privileged addons-758245 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1211 23:56:00.400772   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:00.420690   16270 machine.go:94] provisionDockerMachine start ...
	I1211 23:56:00.420801   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:00.440380   16270 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:00.440715   16270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1211 23:56:00.440732   16270 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 23:56:00.441910   16270 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50836->127.0.0.1:32768: read: connection reset by peer
	I1211 23:56:03.570490   16270 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-758245
	
	I1211 23:56:03.570519   16270 ubuntu.go:182] provisioning hostname "addons-758245"
	I1211 23:56:03.570572   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:03.588844   16270 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:03.589043   16270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1211 23:56:03.589055   16270 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-758245 && echo "addons-758245" | sudo tee /etc/hostname
	I1211 23:56:03.724529   16270 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-758245
	
	I1211 23:56:03.724610   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:03.740628   16270 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:03.740868   16270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1211 23:56:03.740886   16270 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-758245' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-758245/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-758245' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:56:03.869484   16270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:56:03.869514   16270 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1211 23:56:03.869535   16270 ubuntu.go:190] setting up certificates
	I1211 23:56:03.869546   16270 provision.go:84] configureAuth start
	I1211 23:56:03.869591   16270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758245
	I1211 23:56:03.886354   16270 provision.go:143] copyHostCerts
	I1211 23:56:03.886428   16270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1211 23:56:03.886580   16270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1211 23:56:03.886689   16270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1211 23:56:03.886778   16270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.addons-758245 san=[127.0.0.1 192.168.49.2 addons-758245 localhost minikube]
	I1211 23:56:04.052619   16270 provision.go:177] copyRemoteCerts
	I1211 23:56:04.052685   16270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:56:04.052778   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.069348   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:04.163588   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 23:56:04.181238   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1211 23:56:04.196615   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 23:56:04.211984   16270 provision.go:87] duration metric: took 342.423002ms to configureAuth
	I1211 23:56:04.212006   16270 ubuntu.go:206] setting minikube options for container-runtime
	I1211 23:56:04.212177   16270 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:04.212272   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.230349   16270 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:04.230575   16270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1211 23:56:04.230591   16270 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:56:04.493501   16270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:56:04.493525   16270 machine.go:97] duration metric: took 4.072809102s to provisionDockerMachine
	I1211 23:56:04.493537   16270 client.go:176] duration metric: took 13.055363495s to LocalClient.Create
	I1211 23:56:04.493555   16270 start.go:167] duration metric: took 13.055418836s to libmachine.API.Create "addons-758245"
	I1211 23:56:04.493563   16270 start.go:293] postStartSetup for "addons-758245" (driver="docker")
	I1211 23:56:04.493571   16270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:56:04.493627   16270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:56:04.493664   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.510243   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:04.604093   16270 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:56:04.607278   16270 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 23:56:04.607306   16270 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 23:56:04.607318   16270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1211 23:56:04.607381   16270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1211 23:56:04.607407   16270 start.go:296] duration metric: took 113.839582ms for postStartSetup
	I1211 23:56:04.607694   16270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758245
	I1211 23:56:04.623853   16270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/config.json ...
	I1211 23:56:04.624074   16270 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 23:56:04.624115   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.639895   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:04.730773   16270 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 23:56:04.735014   16270 start.go:128] duration metric: took 13.355600447s to createHost
	I1211 23:56:04.735048   16270 start.go:83] releasing machines lock for "addons-758245", held for 13.355746998s
	I1211 23:56:04.735126   16270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758245
	I1211 23:56:04.751266   16270 ssh_runner.go:195] Run: cat /version.json
	I1211 23:56:04.751307   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.751352   16270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:56:04.751434   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:04.766533   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:04.768318   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:04.916148   16270 ssh_runner.go:195] Run: systemctl --version
	I1211 23:56:04.921844   16270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:56:04.952177   16270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:56:04.956147   16270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:56:04.956208   16270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:56:04.979261   16270 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 23:56:04.979278   16270 start.go:496] detecting cgroup driver to use...
	I1211 23:56:04.979326   16270 detect.go:190] detected "systemd" cgroup driver on host os
	I1211 23:56:04.979365   16270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:56:04.993119   16270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:56:05.003591   16270 docker.go:218] disabling cri-docker service (if available) ...
	I1211 23:56:05.003637   16270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:56:05.018234   16270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:56:05.033510   16270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:56:05.108197   16270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:56:05.195829   16270 docker.go:234] disabling docker service ...
	I1211 23:56:05.195895   16270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:56:05.212096   16270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:56:05.223355   16270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:56:05.301517   16270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:56:05.376607   16270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:56:05.387324   16270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:56:05.399723   16270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 23:56:05.399769   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.408865   16270 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1211 23:56:05.408905   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.416435   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.424112   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.431644   16270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:56:05.438612   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.446039   16270 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:05.457658   16270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
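Editorial note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctls). A minimal Go sketch of the first three of those rewrites on a made-up input fragment; the section names and original values are assumptions, not the file captured from the node:

    // Editorial sketch: the same kind of line rewrites the sed commands above
    // apply to 02-crio.conf, done with Go regexps on an assumed fragment.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n" +
            "[crio.runtime]\ncgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"system.slice\"\n"

        // point cri-o at the pause image the log configures
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        // switch the cgroup manager to systemd
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "systemd"`)
        // drop any existing conmon_cgroup line, then re-add it after cgroup_manager
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

        fmt.Print(conf)
    }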
	I1211 23:56:05.465182   16270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:56:05.471401   16270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:56:05.471452   16270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1211 23:56:05.481990   16270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:56:05.489030   16270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:05.564253   16270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 23:56:05.686517   16270 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:56:05.686590   16270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:56:05.690214   16270 start.go:564] Will wait 60s for crictl version
	I1211 23:56:05.690254   16270 ssh_runner.go:195] Run: which crictl
	I1211 23:56:05.693414   16270 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 23:56:05.716545   16270 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 23:56:05.716660   16270 ssh_runner.go:195] Run: crio --version
	I1211 23:56:05.742877   16270 ssh_runner.go:195] Run: crio --version
	I1211 23:56:05.769594   16270 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1211 23:56:05.770586   16270 cli_runner.go:164] Run: docker network inspect addons-758245 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 23:56:05.786862   16270 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 23:56:05.790397   16270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:05.799947   16270 kubeadm.go:884] updating cluster {Name:addons-758245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-758245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:56:05.800055   16270 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:56:05.800094   16270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:05.826899   16270 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:56:05.826915   16270 crio.go:433] Images already preloaded, skipping extraction
	I1211 23:56:05.826950   16270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:05.848845   16270 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:56:05.848861   16270 cache_images.go:86] Images are preloaded, skipping loading
	I1211 23:56:05.848868   16270 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1211 23:56:05.849006   16270 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-758245 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-758245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:56:05.849064   16270 ssh_runner.go:195] Run: crio config
	I1211 23:56:05.891990   16270 cni.go:84] Creating CNI manager for ""
	I1211 23:56:05.892010   16270 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:56:05.892033   16270 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 23:56:05.892053   16270 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-758245 NodeName:addons-758245 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:56:05.892167   16270 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-758245"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 23:56:05.892218   16270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1211 23:56:05.899593   16270 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 23:56:05.899638   16270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 23:56:05.906727   16270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1211 23:56:05.917905   16270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:56:05.931368   16270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1211 23:56:05.942457   16270 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 23:56:05.945616   16270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:05.954258   16270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:06.030403   16270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:06.053319   16270 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245 for IP: 192.168.49.2
	I1211 23:56:06.053340   16270 certs.go:195] generating shared ca certs ...
	I1211 23:56:06.053364   16270 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.053507   16270 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1211 23:56:06.083425   16270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt ...
	I1211 23:56:06.083449   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt: {Name:mk465ac978f45c8cfec04be7ca3a8224a830e496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.083602   16270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key ...
	I1211 23:56:06.083614   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key: {Name:mkde3b032e8b8e64d138e20664de71f2523a9c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.083694   16270 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1211 23:56:06.149272   16270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt ...
	I1211 23:56:06.149298   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt: {Name:mk6aca7ffc58aefc112266ca28b54175cf3a3bf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.149446   16270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key ...
	I1211 23:56:06.149457   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key: {Name:mk9d3ef6dd311fb34aa9dc3f2cd7c88b3c3156ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.149547   16270 certs.go:257] generating profile certs ...
	I1211 23:56:06.149602   16270 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.key
	I1211 23:56:06.149616   16270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt with IP's: []
	I1211 23:56:06.420277   16270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt ...
	I1211 23:56:06.420302   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: {Name:mk30b60689901d46019a4a857b7f031d47f1e73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.420454   16270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.key ...
	I1211 23:56:06.420467   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.key: {Name:mk92f82f3c78de3308df9c171a137a9a32324738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.420547   16270 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key.d1b5940d
	I1211 23:56:06.420564   16270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt.d1b5940d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1211 23:56:06.615486   16270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt.d1b5940d ...
	I1211 23:56:06.615510   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt.d1b5940d: {Name:mkd104ac8ab443ea34a18dcfb351fb6bf3464aac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.615657   16270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key.d1b5940d ...
	I1211 23:56:06.615671   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key.d1b5940d: {Name:mk49326e8f13fe40a6ed83bdb73c63533e26df32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.615740   16270 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt.d1b5940d -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt
	I1211 23:56:06.615832   16270 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key.d1b5940d -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key
	I1211 23:56:06.615888   16270 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.key
	I1211 23:56:06.615906   16270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.crt with IP's: []
	I1211 23:56:06.676044   16270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.crt ...
	I1211 23:56:06.676067   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.crt: {Name:mk12a7644f5e15d4ceb9087be93e635edfc6e5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.676190   16270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.key ...
	I1211 23:56:06.676200   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.key: {Name:mkcac3f890d4ff741f9972cef613e060b3064243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:06.676363   16270 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1211 23:56:06.676396   16270 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1211 23:56:06.676423   16270 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:56:06.676447   16270 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1211 23:56:06.677044   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:56:06.693713   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:56:06.709176   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:56:06.724589   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1211 23:56:06.740139   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 23:56:06.755657   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 23:56:06.770951   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:56:06.786361   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 23:56:06.801712   16270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:56:06.819090   16270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:56:06.831246   16270 ssh_runner.go:195] Run: openssl version
	I1211 23:56:06.837012   16270 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:06.843797   16270 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 23:56:06.852462   16270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:06.855745   16270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:06.855789   16270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:06.888760   16270 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 23:56:06.895224   16270 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1211 23:56:06.901651   16270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:56:06.904709   16270 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:56:06.904762   16270 kubeadm.go:401] StartCluster: {Name:addons-758245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-758245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:56:06.904825   16270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:56:06.904861   16270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:56:06.928866   16270 cri.go:89] found id: ""
	I1211 23:56:06.928916   16270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:56:06.935766   16270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:56:06.942527   16270 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 23:56:06.942562   16270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:56:06.949356   16270 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:56:06.949388   16270 kubeadm.go:158] found existing configuration files:
	
	I1211 23:56:06.949425   16270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:56:06.955963   16270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:56:06.956005   16270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:56:06.962435   16270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:56:06.969001   16270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:56:06.969046   16270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:56:06.975289   16270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:56:06.981925   16270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:56:06.981966   16270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:56:06.988464   16270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:56:06.995095   16270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:56:06.995141   16270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:56:07.001426   16270 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 23:56:07.033382   16270 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1211 23:56:07.033427   16270 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 23:56:07.051724   16270 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 23:56:07.051799   16270 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1211 23:56:07.051866   16270 kubeadm.go:319] OS: Linux
	I1211 23:56:07.051933   16270 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 23:56:07.052020   16270 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 23:56:07.052076   16270 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 23:56:07.052121   16270 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 23:56:07.052162   16270 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 23:56:07.052215   16270 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 23:56:07.052259   16270 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 23:56:07.052339   16270 kubeadm.go:319] CGROUPS_IO: enabled
	I1211 23:56:07.102744   16270 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:56:07.102931   16270 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:56:07.103094   16270 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:56:07.110373   16270 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:56:07.112959   16270 out.go:252]   - Generating certificates and keys ...
	I1211 23:56:07.113058   16270 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 23:56:07.113162   16270 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 23:56:07.465096   16270 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:56:07.609391   16270 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:56:07.712529   16270 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:56:07.940465   16270 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1211 23:56:08.082841   16270 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1211 23:56:08.082969   16270 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-758245 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1211 23:56:08.357074   16270 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1211 23:56:08.357214   16270 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-758245 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1211 23:56:08.765075   16270 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:56:09.002830   16270 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:56:09.587636   16270 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1211 23:56:09.587729   16270 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:56:09.737337   16270 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:56:10.244824   16270 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:56:10.444566   16270 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:56:10.705252   16270 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:56:10.888981   16270 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:56:10.889430   16270 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:56:10.892790   16270 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:56:10.894348   16270 out.go:252]   - Booting up control plane ...
	I1211 23:56:10.894504   16270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:56:10.894607   16270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:56:10.895023   16270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:56:10.908943   16270 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:56:10.909100   16270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 23:56:10.914970   16270 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 23:56:10.915260   16270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:56:10.915330   16270 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 23:56:11.003563   16270 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:56:11.003712   16270 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:56:11.504843   16270 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.433069ms
	I1211 23:56:11.507574   16270 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1211 23:56:11.507723   16270 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1211 23:56:11.507848   16270 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1211 23:56:11.507943   16270 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1211 23:56:13.253333   16270 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.745637025s
	I1211 23:56:13.466393   16270 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.958803186s
	I1211 23:56:15.009372   16270 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501787403s
	I1211 23:56:15.024351   16270 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:56:15.032361   16270 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:56:15.039017   16270 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:56:15.039272   16270 kubeadm.go:319] [mark-control-plane] Marking the node addons-758245 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:56:15.046806   16270 kubeadm.go:319] [bootstrap-token] Using token: dtmtwr.z33wxy23dm2jhz5k
	I1211 23:56:15.047966   16270 out.go:252]   - Configuring RBAC rules ...
	I1211 23:56:15.048060   16270 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:56:15.050520   16270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:56:15.055728   16270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:56:15.057677   16270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:56:15.059576   16270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:56:15.062133   16270 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:56:15.414736   16270 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:56:15.827303   16270 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1211 23:56:16.414038   16270 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1211 23:56:16.414848   16270 kubeadm.go:319] 
	I1211 23:56:16.414911   16270 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1211 23:56:16.414941   16270 kubeadm.go:319] 
	I1211 23:56:16.415046   16270 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1211 23:56:16.415056   16270 kubeadm.go:319] 
	I1211 23:56:16.415092   16270 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1211 23:56:16.415154   16270 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:56:16.415232   16270 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:56:16.415244   16270 kubeadm.go:319] 
	I1211 23:56:16.415325   16270 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1211 23:56:16.415332   16270 kubeadm.go:319] 
	I1211 23:56:16.415392   16270 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:56:16.415415   16270 kubeadm.go:319] 
	I1211 23:56:16.415504   16270 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1211 23:56:16.415624   16270 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:56:16.415713   16270 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:56:16.415735   16270 kubeadm.go:319] 
	I1211 23:56:16.415868   16270 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:56:16.415971   16270 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1211 23:56:16.415983   16270 kubeadm.go:319] 
	I1211 23:56:16.416122   16270 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dtmtwr.z33wxy23dm2jhz5k \
	I1211 23:56:16.416281   16270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1211 23:56:16.416329   16270 kubeadm.go:319] 	--control-plane 
	I1211 23:56:16.416346   16270 kubeadm.go:319] 
	I1211 23:56:16.416438   16270 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:56:16.416445   16270 kubeadm.go:319] 
	I1211 23:56:16.416577   16270 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dtmtwr.z33wxy23dm2jhz5k \
	I1211 23:56:16.416731   16270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1211 23:56:16.418635   16270 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1211 23:56:16.418756   16270 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 23:56:16.418791   16270 cni.go:84] Creating CNI manager for ""
	I1211 23:56:16.418804   16270 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:56:16.420846   16270 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1211 23:56:16.421821   16270 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1211 23:56:16.425740   16270 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1211 23:56:16.425757   16270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1211 23:56:16.438021   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1211 23:56:16.623327   16270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:56:16.623461   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:16.623485   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-758245 minikube.k8s.io/updated_at=2025_12_11T23_56_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=addons-758245 minikube.k8s.io/primary=true
	I1211 23:56:16.633910   16270 ops.go:34] apiserver oom_adj: -16
	I1211 23:56:16.697701   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:17.199228   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:17.698225   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:18.198513   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:18.698748   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:19.198591   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:19.697783   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:20.197996   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:20.698762   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:21.198747   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:21.698659   16270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:21.756636   16270 kubeadm.go:1114] duration metric: took 5.133244409s to wait for elevateKubeSystemPrivileges
	I1211 23:56:21.756673   16270 kubeadm.go:403] duration metric: took 14.85191434s to StartCluster
	I1211 23:56:21.756693   16270 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:21.756804   16270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1211 23:56:21.757254   16270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:21.757427   16270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:56:21.757462   16270 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:56:21.757527   16270 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1211 23:56:21.757661   16270 addons.go:70] Setting default-storageclass=true in profile "addons-758245"
	I1211 23:56:21.757668   16270 addons.go:70] Setting yakd=true in profile "addons-758245"
	I1211 23:56:21.757683   16270 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:21.757691   16270 addons.go:239] Setting addon yakd=true in "addons-758245"
	I1211 23:56:21.757698   16270 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-758245"
	I1211 23:56:21.757698   16270 addons.go:70] Setting cloud-spanner=true in profile "addons-758245"
	I1211 23:56:21.757725   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.757710   16270 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-758245"
	I1211 23:56:21.757730   16270 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-758245"
	I1211 23:56:21.757748   16270 addons.go:239] Setting addon cloud-spanner=true in "addons-758245"
	I1211 23:56:21.757757   16270 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-758245"
	I1211 23:56:21.757764   16270 addons.go:70] Setting storage-provisioner=true in profile "addons-758245"
	I1211 23:56:21.757786   16270 addons.go:239] Setting addon storage-provisioner=true in "addons-758245"
	I1211 23:56:21.757794   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.757808   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.757809   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.757902   16270 addons.go:70] Setting volcano=true in profile "addons-758245"
	I1211 23:56:21.757922   16270 addons.go:239] Setting addon volcano=true in "addons-758245"
	I1211 23:56:21.757946   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.758053   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758220   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758231   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758262   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758299   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758394   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758410   16270 addons.go:70] Setting volumesnapshots=true in profile "addons-758245"
	I1211 23:56:21.758425   16270 addons.go:239] Setting addon volumesnapshots=true in "addons-758245"
	I1211 23:56:21.758447   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.758884   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.758992   16270 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-758245"
	I1211 23:56:21.759040   16270 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-758245"
	I1211 23:56:21.759066   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.759299   16270 addons.go:70] Setting registry=true in profile "addons-758245"
	I1211 23:56:21.759331   16270 addons.go:239] Setting addon registry=true in "addons-758245"
	I1211 23:56:21.759357   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.759903   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.759421   16270 addons.go:70] Setting registry-creds=true in profile "addons-758245"
	I1211 23:56:21.760463   16270 addons.go:239] Setting addon registry-creds=true in "addons-758245"
	I1211 23:56:21.760524   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.758393   16270 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-758245"
	I1211 23:56:21.760134   16270 addons.go:70] Setting gcp-auth=true in profile "addons-758245"
	I1211 23:56:21.760600   16270 mustload.go:66] Loading cluster: addons-758245
	I1211 23:56:21.760812   16270 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-758245"
	I1211 23:56:21.760175   16270 addons.go:70] Setting ingress=true in profile "addons-758245"
	I1211 23:56:21.760971   16270 addons.go:239] Setting addon ingress=true in "addons-758245"
	I1211 23:56:21.761019   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.761050   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.760185   16270 addons.go:70] Setting ingress-dns=true in profile "addons-758245"
	I1211 23:56:21.761383   16270 addons.go:239] Setting addon ingress-dns=true in "addons-758245"
	I1211 23:56:21.761428   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.760194   16270 addons.go:70] Setting inspektor-gadget=true in profile "addons-758245"
	I1211 23:56:21.761591   16270 addons.go:239] Setting addon inspektor-gadget=true in "addons-758245"
	I1211 23:56:21.761620   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.760825   16270 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:21.761924   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.761967   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.762109   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.764595   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.760204   16270 addons.go:70] Setting metrics-server=true in profile "addons-758245"
	I1211 23:56:21.757749   16270 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-758245"
	I1211 23:56:21.764640   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.764692   16270 out.go:179] * Verifying Kubernetes components...
	I1211 23:56:21.765227   16270 addons.go:239] Setting addon metrics-server=true in "addons-758245"
	I1211 23:56:21.765267   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.766316   16270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:21.770052   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.770278   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.770938   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.771271   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.800610   16270 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1211 23:56:21.801890   16270 out.go:179]   - Using image docker.io/registry:3.0.0
	I1211 23:56:21.803014   16270 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1211 23:56:21.803031   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1211 23:56:21.803094   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.811642   16270 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1211 23:56:21.813286   16270 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:21.813307   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1211 23:56:21.813387   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.832530   16270 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1211 23:56:21.833919   16270 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1211 23:56:21.833943   16270 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1211 23:56:21.834024   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	W1211 23:56:21.834162   16270 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1211 23:56:21.836090   16270 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:56:21.838176   16270 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1211 23:56:21.838370   16270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:21.838406   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:56:21.838492   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.839293   16270 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:21.839308   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1211 23:56:21.839358   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.849327   16270 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-758245"
	I1211 23:56:21.849384   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.849527   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.849884   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.851012   16270 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1211 23:56:21.852826   16270 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:21.852845   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1211 23:56:21.852899   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.854555   16270 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1211 23:56:21.855639   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1211 23:56:21.856847   16270 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:21.856862   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1211 23:56:21.856912   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.856990   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1211 23:56:21.856999   16270 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1211 23:56:21.857038   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.865494   16270 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1211 23:56:21.865514   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1211 23:56:21.866868   16270 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1211 23:56:21.866886   16270 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1211 23:56:21.866940   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.867328   16270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:21.867561   16270 addons.go:239] Setting addon default-storageclass=true in "addons-758245"
	I1211 23:56:21.868124   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:21.868403   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1211 23:56:21.868640   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:21.872529   16270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:21.872642   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1211 23:56:21.873900   16270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1211 23:56:21.875072   16270 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:21.875090   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1211 23:56:21.875146   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.875293   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1211 23:56:21.878205   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1211 23:56:21.879443   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1211 23:56:21.880629   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1211 23:56:21.884439   16270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1211 23:56:21.884777   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.884936   16270 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1211 23:56:21.887576   16270 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:21.887595   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1211 23:56:21.887630   16270 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1211 23:56:21.887648   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.888386   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1211 23:56:21.888401   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1211 23:56:21.888456   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.888799   16270 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:21.888812   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1211 23:56:21.888877   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.904470   16270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:56:21.911425   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.921582   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.925110   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.927185   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.941915   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.944410   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.949552   16270 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:21.949739   16270 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:56:21.950290   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.951097   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.956947   16270 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1211 23:56:21.958523   16270 out.go:179]   - Using image docker.io/busybox:stable
	I1211 23:56:21.958679   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.959943   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.959961   16270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:21.959975   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1211 23:56:21.960028   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:21.961449   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.962564   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.982948   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.988369   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:21.993461   16270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:21.997936   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:22.068794   16270 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1211 23:56:22.068818   16270 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1211 23:56:22.088074   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:22.088681   16270 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:22.088697   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1211 23:56:22.095442   16270 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1211 23:56:22.095460   16270 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1211 23:56:22.102110   16270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1211 23:56:22.102132   16270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1211 23:56:22.107181   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:22.114271   16270 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1211 23:56:22.114301   16270 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1211 23:56:22.121243   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:22.135083   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1211 23:56:22.135112   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1211 23:56:22.136292   16270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1211 23:56:22.136315   16270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1211 23:56:22.145905   16270 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1211 23:56:22.145923   16270 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1211 23:56:22.146889   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:22.153994   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:22.155984   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:22.157068   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:22.157769   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:22.161088   16270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1211 23:56:22.161113   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1211 23:56:22.163041   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:22.169974   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:22.176165   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:22.198968   16270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1211 23:56:22.199072   16270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1211 23:56:22.207187   16270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1211 23:56:22.207208   16270 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1211 23:56:22.207344   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1211 23:56:22.207355   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1211 23:56:22.212168   16270 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:22.212186   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1211 23:56:22.255468   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1211 23:56:22.255510   16270 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1211 23:56:22.267654   16270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:22.267675   16270 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1211 23:56:22.278237   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1211 23:56:22.278548   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1211 23:56:22.278878   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:22.307630   16270 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:22.307652   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1211 23:56:22.320784   16270 node_ready.go:35] waiting up to 6m0s for node "addons-758245" to be "Ready" ...
	I1211 23:56:22.321059   16270 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1211 23:56:22.338647   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:22.350839   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1211 23:56:22.350866   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1211 23:56:22.358465   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:22.396657   16270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1211 23:56:22.396691   16270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1211 23:56:22.454085   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1211 23:56:22.454109   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1211 23:56:22.512196   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1211 23:56:22.512226   16270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1211 23:56:22.566912   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1211 23:56:22.566943   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1211 23:56:22.624831   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1211 23:56:22.624860   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1211 23:56:22.653977   16270 addons.go:495] Verifying addon registry=true in "addons-758245"
	I1211 23:56:22.656215   16270 out.go:179] * Verifying registry addon...
	I1211 23:56:22.658132   16270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1211 23:56:22.667888   16270 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:56:22.667922   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
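[editor's note] The kapi.go:96 lines above and throughout the rest of this log are the addon verifier polling pods by label selector (registry, ingress-nginx, csi-hostpath-driver and, later, gcp-auth) until each one reports Running instead of Pending. For orientation only, here is a minimal client-go sketch of that kind of label-selector wait, assuming a kubeconfig at the default location; the package and function names are illustrative and this is not minikube's kapi implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsRunning polls pods matching selector in ns until all of them report
	// phase Running, mirroring the "waiting for pod ... current state: Pending" lines
	// in this log. Illustrative sketch only.
	func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
				}
			}
			if running {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("registry pods are Running")
	}

The real verifier keeps polling for several minutes, which is why the same Pending line repeats every few hundred milliseconds in the log that follows.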
	I1211 23:56:22.676364   16270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:22.676385   16270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1211 23:56:22.720418   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:22.826375   16270 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-758245" context rescaled to 1 replicas
	I1211 23:56:23.161982   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:23.276559   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.120541763s)
	I1211 23:56:23.276599   16270 addons.go:495] Verifying addon ingress=true in "addons-758245"
	I1211 23:56:23.276670   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.118873305s)
	I1211 23:56:23.276737   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.119644481s)
	I1211 23:56:23.276808   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.113747439s)
	I1211 23:56:23.276862   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.106862228s)
	I1211 23:56:23.276957   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.100719304s)
	I1211 23:56:23.277104   16270 addons.go:495] Verifying addon metrics-server=true in "addons-758245"
	I1211 23:56:23.278005   16270 out.go:179] * Verifying ingress addon...
	I1211 23:56:23.278653   16270 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-758245 service yakd-dashboard -n yakd-dashboard
	
	I1211 23:56:23.280712   16270 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1211 23:56:23.282978   16270 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	W1211 23:56:23.283065   16270 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
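[editor's note] The warning just above is a standard optimistic-concurrency failure: two writers raced on the local-path StorageClass, so the update carried a stale resourceVersion and the API server rejected it with a Conflict ("the object has been modified"). The conventional client-go remedy is to re-read the object and re-apply the change on every attempt. A minimal sketch, assuming a kubernetes.Interface client is already constructed; the function name is illustrative and this is not minikube's addon callback:

	package addonsketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markDefaultStorageClass retries the annotation update whenever the API server
	// returns a Conflict, fetching a fresh copy (and resourceVersion) each time.
	func markDefaultStorageClass(cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err // a Conflict here simply triggers another attempt with a fresh read
		})
	}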
	I1211 23:56:23.661313   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:23.672904   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.314380478s)
	W1211 23:56:23.672957   16270 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:56:23.672986   16270 retry.go:31] will retry after 231.048662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
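[editor's note] The two identical failures above are an ordering problem rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the v1 API is not yet discoverable ("ensure CRDs are installed first"). The retry, re-run with kubectl apply --force at 23:56:23.904155, goes through once the CRDs exist. If this were handled in code rather than by re-running kubectl, one option is to wait for the CRD to report the Established condition before applying dependent objects; a hedged sketch using the apiextensions clientset (names are illustrative, and this is not how minikube's retry.go works):

	package addonsketch

	import (
		"context"
		"fmt"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// waitForCRDEstablished blocks until the named CustomResourceDefinition reports
	// the Established condition, which is exactly what the "no matches for kind"
	// error above is missing at the moment the dependent object is applied.
	func waitForCRDEstablished(c apiextclient.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return nil
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("CRD %s not established after %s", name, timeout)
			}
			time.Sleep(250 * time.Millisecond)
		}
	}

For the manifests above, the CRD to wait for would be volumesnapshotclasses.snapshot.storage.k8s.io before applying the VolumeSnapshotClass.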
	I1211 23:56:23.673098   16270 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-758245"
	I1211 23:56:23.674636   16270 out.go:179] * Verifying csi-hostpath-driver addon...
	I1211 23:56:23.676627   16270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1211 23:56:23.679757   16270 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:56:23.679782   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:23.784138   16270 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1211 23:56:23.784158   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:23.904155   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:24.160751   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:24.179356   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:24.283383   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:24.322929   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:24.661012   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:24.678689   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:24.783628   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:25.160870   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:25.178725   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:25.283430   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:25.661127   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:25.679175   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:25.783353   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:26.160345   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:26.178759   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:26.283688   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:26.323739   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:26.346718   16270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.442522654s)
	I1211 23:56:26.660458   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:26.679188   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:26.783212   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:27.160610   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:27.179274   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:27.283259   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:27.660441   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:27.678987   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:27.783316   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:28.160981   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:28.178688   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:28.283708   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:28.323876   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:28.660822   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:28.678410   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:28.783602   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:29.160853   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:29.178549   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:29.283525   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:29.459546   16270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1211 23:56:29.459621   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:29.476826   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:29.575268   16270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1211 23:56:29.586459   16270 addons.go:239] Setting addon gcp-auth=true in "addons-758245"
	I1211 23:56:29.586509   16270 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:56:29.586820   16270 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:56:29.603327   16270 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1211 23:56:29.603376   16270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:56:29.618533   16270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:56:29.661697   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:29.678517   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:29.709769   16270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:29.710920   16270 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1211 23:56:29.711946   16270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1211 23:56:29.711958   16270 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1211 23:56:29.724074   16270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1211 23:56:29.724088   16270 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1211 23:56:29.735650   16270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:29.735669   16270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1211 23:56:29.747529   16270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:29.783844   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:30.033393   16270 addons.go:495] Verifying addon gcp-auth=true in "addons-758245"
	I1211 23:56:30.034712   16270 out.go:179] * Verifying gcp-auth addon...
	I1211 23:56:30.036444   16270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1211 23:56:30.038506   16270 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1211 23:56:30.038533   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:30.160705   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:30.179540   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:30.283610   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:30.538547   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:30.668753   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:30.679159   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:30.783262   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:30.823026   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:31.039589   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:31.160796   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:31.178619   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:31.283823   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:31.538941   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:31.661044   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:31.678980   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:31.782990   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:32.039267   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:32.160268   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:32.179033   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:32.283138   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:32.539110   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:32.661330   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:32.679028   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:32.783089   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:33.039330   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:33.160348   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:33.179164   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:33.283384   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:33.323054   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:33.539377   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:33.660679   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:33.679385   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:33.783655   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:34.038647   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:34.160636   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:34.179318   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:34.283330   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:34.539557   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:34.660816   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:34.678516   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:34.783354   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:35.039577   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:35.160932   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:35.178765   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:35.282798   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:35.323454   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:35.538659   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:35.661096   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:35.679008   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:35.783393   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:36.039926   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:36.161108   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:36.178845   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:36.283182   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:36.539224   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:36.660440   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:36.679190   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:36.783264   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:37.039682   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:37.160690   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:37.179441   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:37.283874   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:37.323606   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:37.539089   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:37.661221   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:37.679081   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:37.783280   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:38.039456   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:38.160702   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:38.179398   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:38.283546   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:38.539428   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:38.660700   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:38.678357   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:38.783454   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:39.039793   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:39.160791   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:39.178637   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:39.283909   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:39.539015   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:39.661153   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:39.679045   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:39.783187   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:39.822935   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:40.039253   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:40.160514   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:40.179389   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:40.283388   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:40.539562   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:40.660787   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:40.678611   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:40.783625   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:41.039819   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:41.161336   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:41.179192   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:41.283290   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:41.539858   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:41.661163   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:41.679142   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:41.783468   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:41.823160   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:42.039666   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:42.160796   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:42.178644   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:42.283686   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:42.538826   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:42.661023   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:42.678777   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:42.782777   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:43.038877   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:43.161056   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:43.178935   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:43.283127   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:43.539110   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:43.661450   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:43.679377   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:43.783701   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:43.823671   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:44.039191   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:44.161210   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:44.178983   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:44.283126   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:44.539235   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:44.660422   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:44.679182   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:44.783248   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:45.039174   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:45.160186   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:45.178994   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:45.283018   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:45.538929   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:45.660970   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:45.678870   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:45.783029   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:46.039364   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:46.160462   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:46.179263   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:46.283346   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:46.323238   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:46.540065   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:46.661211   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:46.678941   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:46.782973   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:47.039295   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:47.160512   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:47.179300   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:47.283789   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:47.539103   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:47.661308   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:47.679227   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:47.783612   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:48.038561   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:48.160756   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:48.178427   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:48.283509   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:48.539560   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:48.660584   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:48.679449   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:48.783375   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:48.823077   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:49.039362   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:49.160293   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:49.179047   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:49.283310   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:49.539616   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:49.660652   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:49.679720   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:49.783916   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:50.038922   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:50.161000   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:50.178861   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:50.283193   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:50.539101   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:50.661352   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:50.679291   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:50.783307   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:51.039392   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:51.160384   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:51.179128   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:51.283051   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:51.322591   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:51.538828   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:51.661123   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:51.678983   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:51.783331   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.039739   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:52.160762   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.178451   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:52.283650   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.539199   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:52.660400   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.679379   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:52.783515   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:53.039508   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:53.160814   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.178691   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:53.283785   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:53.323732   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:53.538956   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:53.660975   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.678851   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:53.783161   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.039253   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:54.160169   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.178939   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.283152   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.539304   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:54.660374   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.679266   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.783406   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:55.039221   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:55.160277   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.179159   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.283422   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:55.539390   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:55.660639   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.679709   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.783915   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:55.822651   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:56.039125   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.161231   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:56.179030   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.283110   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.539093   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.661195   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:56.680068   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.783201   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:57.039057   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:57.161074   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.178883   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:57.283348   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:57.539688   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:57.660894   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.678826   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:57.782862   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:56:57.823919   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:56:58.039297   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:58.160264   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:58.179096   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:58.283198   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:58.538889   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:58.660936   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:58.678775   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:58.782690   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:59.038592   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:59.160469   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:59.179245   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:59.283372   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:59.539427   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:59.660698   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:59.678452   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:59.783578   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:00.038663   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:00.160722   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:00.179469   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:00.283642   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1211 23:57:00.323311   16270 node_ready.go:57] node "addons-758245" has "Ready":"False" status (will retry)
	I1211 23:57:00.539542   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:00.660544   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:00.679313   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:00.783374   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:01.039364   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:01.160543   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.179401   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:01.283533   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:01.538684   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:01.660758   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.678838   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:01.782886   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.039186   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:02.160397   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:02.179426   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:02.283932   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.539074   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:02.661355   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:02.679734   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:02.782707   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.823466   16270 node_ready.go:49] node "addons-758245" is "Ready"
	I1211 23:57:02.823515   16270 node_ready.go:38] duration metric: took 40.502700107s for node "addons-758245" to be "Ready" ...
	I1211 23:57:02.823534   16270 api_server.go:52] waiting for apiserver process to appear ...
	I1211 23:57:02.823594   16270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 23:57:02.839929   16270 api_server.go:72] duration metric: took 41.082419196s to wait for apiserver process to appear ...
	I1211 23:57:02.839954   16270 api_server.go:88] waiting for apiserver healthz status ...
	I1211 23:57:02.839978   16270 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1211 23:57:02.845343   16270 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1211 23:57:02.846396   16270 api_server.go:141] control plane version: v1.34.2
	I1211 23:57:02.846424   16270 api_server.go:131] duration metric: took 6.461954ms to wait for apiserver health ...
	I1211 23:57:02.846436   16270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 23:57:02.855158   16270 system_pods.go:59] 20 kube-system pods found
	I1211 23:57:02.855196   16270 system_pods.go:61] "amd-gpu-device-plugin-t4nwx" [c2c64d04-3068-46a6-9981-66ab6ada3e01] Pending
	I1211 23:57:02.855208   16270 system_pods.go:61] "coredns-66bc5c9577-xxwm5" [de02078f-e23e-4ca9-91e5-0f424f00cee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 23:57:02.855217   16270 system_pods.go:61] "csi-hostpath-attacher-0" [c7bae797-cf53-4b9c-97c3-0b0e891df4b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:57:02.855229   16270 system_pods.go:61] "csi-hostpath-resizer-0" [718b3422-383a-4f39-a77c-14135e707c4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:57:02.855235   16270 system_pods.go:61] "csi-hostpathplugin-5nn2t" [dd0d3f24-7210-49df-84b7-95e27387db16] Pending
	I1211 23:57:02.855240   16270 system_pods.go:61] "etcd-addons-758245" [35d2e8f3-6273-42f6-b0b6-69593941932e] Running
	I1211 23:57:02.855244   16270 system_pods.go:61] "kindnet-vctlp" [e3d24177-0c80-4919-b891-7f26f355f0f1] Running
	I1211 23:57:02.855251   16270 system_pods.go:61] "kube-apiserver-addons-758245" [24efbe93-266c-488f-9474-c8ebd3a90385] Running
	I1211 23:57:02.855256   16270 system_pods.go:61] "kube-controller-manager-addons-758245" [0eb5a6aa-9ac9-4b60-a5e5-58078366cd04] Running
	I1211 23:57:02.855264   16270 system_pods.go:61] "kube-ingress-dns-minikube" [ceaf163d-b482-416b-99af-e0419ec5a9a5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:57:02.855269   16270 system_pods.go:61] "kube-proxy-2ldz5" [aa397dad-1532-47a5-9c67-d54a6c33f5c9] Running
	I1211 23:57:02.855274   16270 system_pods.go:61] "kube-scheduler-addons-758245" [586dd9ce-332f-4ace-a65f-ef7bf74d1e0b] Running
	I1211 23:57:02.855281   16270 system_pods.go:61] "metrics-server-85b7d694d7-lzpx2" [b6634516-aafd-439b-aa29-2feb3678dca5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:57:02.855290   16270 system_pods.go:61] "nvidia-device-plugin-daemonset-5r9hw" [cf07904f-5cff-48d7-a33a-06925d2e0b55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:57:02.855305   16270 system_pods.go:61] "registry-6b586f9694-ctpbw" [8f86a5b8-2ea9-4839-9462-cfe7303189b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:57:02.855313   16270 system_pods.go:61] "registry-creds-764b6fb674-rkslf" [d7a05cdd-171e-4999-9c92-9696f6943f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:57:02.855321   16270 system_pods.go:61] "registry-proxy-9n7hj" [1f9bf5f3-1436-4f37-97c7-503e1b285750] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:57:02.855328   16270 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4vpl2" [4fa25eb3-c777-444e-8df2-1f82dd6b89b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:02.855337   16270 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tgsc4" [c54cb689-98d5-4b4c-aef9-5a82bff58c3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:02.855346   16270 system_pods.go:61] "storage-provisioner" [fbe17644-ac82-4cd1-ba81-92edc1aa060f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:57:02.855353   16270 system_pods.go:74] duration metric: took 8.910426ms to wait for pod list to return data ...
	I1211 23:57:02.855362   16270 default_sa.go:34] waiting for default service account to be created ...
	I1211 23:57:02.863741   16270 default_sa.go:45] found service account: "default"
	I1211 23:57:02.863767   16270 default_sa.go:55] duration metric: took 8.398261ms for default service account to be created ...
	I1211 23:57:02.863777   16270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 23:57:02.869797   16270 system_pods.go:86] 20 kube-system pods found
	I1211 23:57:02.869832   16270 system_pods.go:89] "amd-gpu-device-plugin-t4nwx" [c2c64d04-3068-46a6-9981-66ab6ada3e01] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:57:02.869842   16270 system_pods.go:89] "coredns-66bc5c9577-xxwm5" [de02078f-e23e-4ca9-91e5-0f424f00cee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 23:57:02.869852   16270 system_pods.go:89] "csi-hostpath-attacher-0" [c7bae797-cf53-4b9c-97c3-0b0e891df4b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:57:02.869861   16270 system_pods.go:89] "csi-hostpath-resizer-0" [718b3422-383a-4f39-a77c-14135e707c4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:57:02.869876   16270 system_pods.go:89] "csi-hostpathplugin-5nn2t" [dd0d3f24-7210-49df-84b7-95e27387db16] Pending
	I1211 23:57:02.869882   16270 system_pods.go:89] "etcd-addons-758245" [35d2e8f3-6273-42f6-b0b6-69593941932e] Running
	I1211 23:57:02.869894   16270 system_pods.go:89] "kindnet-vctlp" [e3d24177-0c80-4919-b891-7f26f355f0f1] Running
	I1211 23:57:02.869900   16270 system_pods.go:89] "kube-apiserver-addons-758245" [24efbe93-266c-488f-9474-c8ebd3a90385] Running
	I1211 23:57:02.869912   16270 system_pods.go:89] "kube-controller-manager-addons-758245" [0eb5a6aa-9ac9-4b60-a5e5-58078366cd04] Running
	I1211 23:57:02.869920   16270 system_pods.go:89] "kube-ingress-dns-minikube" [ceaf163d-b482-416b-99af-e0419ec5a9a5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:57:02.869933   16270 system_pods.go:89] "kube-proxy-2ldz5" [aa397dad-1532-47a5-9c67-d54a6c33f5c9] Running
	I1211 23:57:02.869939   16270 system_pods.go:89] "kube-scheduler-addons-758245" [586dd9ce-332f-4ace-a65f-ef7bf74d1e0b] Running
	I1211 23:57:02.869947   16270 system_pods.go:89] "metrics-server-85b7d694d7-lzpx2" [b6634516-aafd-439b-aa29-2feb3678dca5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:57:02.869961   16270 system_pods.go:89] "nvidia-device-plugin-daemonset-5r9hw" [cf07904f-5cff-48d7-a33a-06925d2e0b55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:57:02.869968   16270 system_pods.go:89] "registry-6b586f9694-ctpbw" [8f86a5b8-2ea9-4839-9462-cfe7303189b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:57:02.869975   16270 system_pods.go:89] "registry-creds-764b6fb674-rkslf" [d7a05cdd-171e-4999-9c92-9696f6943f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:57:02.869983   16270 system_pods.go:89] "registry-proxy-9n7hj" [1f9bf5f3-1436-4f37-97c7-503e1b285750] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:57:02.869991   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4vpl2" [4fa25eb3-c777-444e-8df2-1f82dd6b89b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:02.869999   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tgsc4" [c54cb689-98d5-4b4c-aef9-5a82bff58c3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:02.870006   16270 system_pods.go:89] "storage-provisioner" [fbe17644-ac82-4cd1-ba81-92edc1aa060f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:57:02.870024   16270 retry.go:31] will retry after 253.885498ms: missing components: kube-dns
	I1211 23:57:03.039435   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:03.142071   16270 system_pods.go:86] 20 kube-system pods found
	I1211 23:57:03.142112   16270 system_pods.go:89] "amd-gpu-device-plugin-t4nwx" [c2c64d04-3068-46a6-9981-66ab6ada3e01] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:57:03.142123   16270 system_pods.go:89] "coredns-66bc5c9577-xxwm5" [de02078f-e23e-4ca9-91e5-0f424f00cee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 23:57:03.142133   16270 system_pods.go:89] "csi-hostpath-attacher-0" [c7bae797-cf53-4b9c-97c3-0b0e891df4b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:57:03.142142   16270 system_pods.go:89] "csi-hostpath-resizer-0" [718b3422-383a-4f39-a77c-14135e707c4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:57:03.142151   16270 system_pods.go:89] "csi-hostpathplugin-5nn2t" [dd0d3f24-7210-49df-84b7-95e27387db16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:57:03.142160   16270 system_pods.go:89] "etcd-addons-758245" [35d2e8f3-6273-42f6-b0b6-69593941932e] Running
	I1211 23:57:03.142167   16270 system_pods.go:89] "kindnet-vctlp" [e3d24177-0c80-4919-b891-7f26f355f0f1] Running
	I1211 23:57:03.142173   16270 system_pods.go:89] "kube-apiserver-addons-758245" [24efbe93-266c-488f-9474-c8ebd3a90385] Running
	I1211 23:57:03.142178   16270 system_pods.go:89] "kube-controller-manager-addons-758245" [0eb5a6aa-9ac9-4b60-a5e5-58078366cd04] Running
	I1211 23:57:03.142186   16270 system_pods.go:89] "kube-ingress-dns-minikube" [ceaf163d-b482-416b-99af-e0419ec5a9a5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:57:03.142191   16270 system_pods.go:89] "kube-proxy-2ldz5" [aa397dad-1532-47a5-9c67-d54a6c33f5c9] Running
	I1211 23:57:03.142197   16270 system_pods.go:89] "kube-scheduler-addons-758245" [586dd9ce-332f-4ace-a65f-ef7bf74d1e0b] Running
	I1211 23:57:03.142204   16270 system_pods.go:89] "metrics-server-85b7d694d7-lzpx2" [b6634516-aafd-439b-aa29-2feb3678dca5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:57:03.142214   16270 system_pods.go:89] "nvidia-device-plugin-daemonset-5r9hw" [cf07904f-5cff-48d7-a33a-06925d2e0b55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:57:03.142224   16270 system_pods.go:89] "registry-6b586f9694-ctpbw" [8f86a5b8-2ea9-4839-9462-cfe7303189b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:57:03.142232   16270 system_pods.go:89] "registry-creds-764b6fb674-rkslf" [d7a05cdd-171e-4999-9c92-9696f6943f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:57:03.142240   16270 system_pods.go:89] "registry-proxy-9n7hj" [1f9bf5f3-1436-4f37-97c7-503e1b285750] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:57:03.142250   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4vpl2" [4fa25eb3-c777-444e-8df2-1f82dd6b89b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.142258   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tgsc4" [c54cb689-98d5-4b4c-aef9-5a82bff58c3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.142265   16270 system_pods.go:89] "storage-provisioner" [fbe17644-ac82-4cd1-ba81-92edc1aa060f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:57:03.142284   16270 retry.go:31] will retry after 385.118569ms: missing components: kube-dns
	I1211 23:57:03.240370   16270 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:57:03.240393   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:03.240599   16270 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:57:03.240619   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:03.283327   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:03.532664   16270 system_pods.go:86] 20 kube-system pods found
	I1211 23:57:03.532697   16270 system_pods.go:89] "amd-gpu-device-plugin-t4nwx" [c2c64d04-3068-46a6-9981-66ab6ada3e01] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:57:03.532705   16270 system_pods.go:89] "coredns-66bc5c9577-xxwm5" [de02078f-e23e-4ca9-91e5-0f424f00cee3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 23:57:03.532712   16270 system_pods.go:89] "csi-hostpath-attacher-0" [c7bae797-cf53-4b9c-97c3-0b0e891df4b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:57:03.532717   16270 system_pods.go:89] "csi-hostpath-resizer-0" [718b3422-383a-4f39-a77c-14135e707c4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:57:03.532723   16270 system_pods.go:89] "csi-hostpathplugin-5nn2t" [dd0d3f24-7210-49df-84b7-95e27387db16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:57:03.532727   16270 system_pods.go:89] "etcd-addons-758245" [35d2e8f3-6273-42f6-b0b6-69593941932e] Running
	I1211 23:57:03.532731   16270 system_pods.go:89] "kindnet-vctlp" [e3d24177-0c80-4919-b891-7f26f355f0f1] Running
	I1211 23:57:03.532734   16270 system_pods.go:89] "kube-apiserver-addons-758245" [24efbe93-266c-488f-9474-c8ebd3a90385] Running
	I1211 23:57:03.532738   16270 system_pods.go:89] "kube-controller-manager-addons-758245" [0eb5a6aa-9ac9-4b60-a5e5-58078366cd04] Running
	I1211 23:57:03.532743   16270 system_pods.go:89] "kube-ingress-dns-minikube" [ceaf163d-b482-416b-99af-e0419ec5a9a5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:57:03.532750   16270 system_pods.go:89] "kube-proxy-2ldz5" [aa397dad-1532-47a5-9c67-d54a6c33f5c9] Running
	I1211 23:57:03.532754   16270 system_pods.go:89] "kube-scheduler-addons-758245" [586dd9ce-332f-4ace-a65f-ef7bf74d1e0b] Running
	I1211 23:57:03.532760   16270 system_pods.go:89] "metrics-server-85b7d694d7-lzpx2" [b6634516-aafd-439b-aa29-2feb3678dca5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:57:03.532768   16270 system_pods.go:89] "nvidia-device-plugin-daemonset-5r9hw" [cf07904f-5cff-48d7-a33a-06925d2e0b55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:57:03.532773   16270 system_pods.go:89] "registry-6b586f9694-ctpbw" [8f86a5b8-2ea9-4839-9462-cfe7303189b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:57:03.532778   16270 system_pods.go:89] "registry-creds-764b6fb674-rkslf" [d7a05cdd-171e-4999-9c92-9696f6943f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:57:03.532782   16270 system_pods.go:89] "registry-proxy-9n7hj" [1f9bf5f3-1436-4f37-97c7-503e1b285750] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:57:03.532790   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4vpl2" [4fa25eb3-c777-444e-8df2-1f82dd6b89b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.532795   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tgsc4" [c54cb689-98d5-4b4c-aef9-5a82bff58c3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.532801   16270 system_pods.go:89] "storage-provisioner" [fbe17644-ac82-4cd1-ba81-92edc1aa060f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:57:03.532819   16270 retry.go:31] will retry after 364.851391ms: missing components: kube-dns
	I1211 23:57:03.538725   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:03.660974   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:03.679090   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:03.786037   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:03.901572   16270 system_pods.go:86] 20 kube-system pods found
	I1211 23:57:03.901603   16270 system_pods.go:89] "amd-gpu-device-plugin-t4nwx" [c2c64d04-3068-46a6-9981-66ab6ada3e01] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:57:03.901609   16270 system_pods.go:89] "coredns-66bc5c9577-xxwm5" [de02078f-e23e-4ca9-91e5-0f424f00cee3] Running
	I1211 23:57:03.901617   16270 system_pods.go:89] "csi-hostpath-attacher-0" [c7bae797-cf53-4b9c-97c3-0b0e891df4b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:57:03.901623   16270 system_pods.go:89] "csi-hostpath-resizer-0" [718b3422-383a-4f39-a77c-14135e707c4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:57:03.901628   16270 system_pods.go:89] "csi-hostpathplugin-5nn2t" [dd0d3f24-7210-49df-84b7-95e27387db16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:57:03.901633   16270 system_pods.go:89] "etcd-addons-758245" [35d2e8f3-6273-42f6-b0b6-69593941932e] Running
	I1211 23:57:03.901637   16270 system_pods.go:89] "kindnet-vctlp" [e3d24177-0c80-4919-b891-7f26f355f0f1] Running
	I1211 23:57:03.901641   16270 system_pods.go:89] "kube-apiserver-addons-758245" [24efbe93-266c-488f-9474-c8ebd3a90385] Running
	I1211 23:57:03.901647   16270 system_pods.go:89] "kube-controller-manager-addons-758245" [0eb5a6aa-9ac9-4b60-a5e5-58078366cd04] Running
	I1211 23:57:03.901653   16270 system_pods.go:89] "kube-ingress-dns-minikube" [ceaf163d-b482-416b-99af-e0419ec5a9a5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:57:03.901658   16270 system_pods.go:89] "kube-proxy-2ldz5" [aa397dad-1532-47a5-9c67-d54a6c33f5c9] Running
	I1211 23:57:03.901661   16270 system_pods.go:89] "kube-scheduler-addons-758245" [586dd9ce-332f-4ace-a65f-ef7bf74d1e0b] Running
	I1211 23:57:03.901666   16270 system_pods.go:89] "metrics-server-85b7d694d7-lzpx2" [b6634516-aafd-439b-aa29-2feb3678dca5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:57:03.901673   16270 system_pods.go:89] "nvidia-device-plugin-daemonset-5r9hw" [cf07904f-5cff-48d7-a33a-06925d2e0b55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:57:03.901678   16270 system_pods.go:89] "registry-6b586f9694-ctpbw" [8f86a5b8-2ea9-4839-9462-cfe7303189b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:57:03.901721   16270 system_pods.go:89] "registry-creds-764b6fb674-rkslf" [d7a05cdd-171e-4999-9c92-9696f6943f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:57:03.901732   16270 system_pods.go:89] "registry-proxy-9n7hj" [1f9bf5f3-1436-4f37-97c7-503e1b285750] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:57:03.901738   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4vpl2" [4fa25eb3-c777-444e-8df2-1f82dd6b89b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.901745   16270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tgsc4" [c54cb689-98d5-4b4c-aef9-5a82bff58c3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:57:03.901750   16270 system_pods.go:89] "storage-provisioner" [fbe17644-ac82-4cd1-ba81-92edc1aa060f] Running
	I1211 23:57:03.901757   16270 system_pods.go:126] duration metric: took 1.037972891s to wait for k8s-apps to be running ...
	I1211 23:57:03.901766   16270 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 23:57:03.901809   16270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 23:57:03.914188   16270 system_svc.go:56] duration metric: took 12.412668ms WaitForService to wait for kubelet
	I1211 23:57:03.914215   16270 kubeadm.go:587] duration metric: took 42.15670987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:57:03.914233   16270 node_conditions.go:102] verifying NodePressure condition ...
	I1211 23:57:03.916372   16270 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1211 23:57:03.916394   16270 node_conditions.go:123] node cpu capacity is 8
	I1211 23:57:03.916408   16270 node_conditions.go:105] duration metric: took 2.171236ms to run NodePressure ...
	I1211 23:57:03.916421   16270 start.go:242] waiting for startup goroutines ...
	I1211 23:57:04.040201   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:04.162073   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:04.180252   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:04.283990   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:04.540615   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:04.661030   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:04.679007   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:04.783106   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.040215   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:05.161523   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:05.262936   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:05.284887   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.539709   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:05.661487   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:05.680147   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:05.783959   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:06.039791   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:06.161559   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:06.179968   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:06.283703   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:06.539597   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:06.661253   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:06.679403   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:06.783813   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:07.039616   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:07.160889   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:07.178966   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:07.283418   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:07.540410   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:07.661510   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:07.680522   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:07.785059   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:08.040333   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:08.160622   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:08.179915   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:08.283601   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:08.539227   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:08.661165   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:08.679747   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:08.784391   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:09.039047   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:09.161678   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.180346   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.290735   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:09.539239   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:09.660928   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.679558   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.784397   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.039590   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:10.160960   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:10.179146   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:10.283673   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.539560   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:10.661192   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:10.679915   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:10.783839   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:11.039776   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:11.161659   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.180457   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:11.284694   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:11.539432   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:11.660671   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.679950   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:11.783706   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:12.040046   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:12.162761   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:12.180053   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:12.283985   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:12.541358   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:12.661262   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:12.680719   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:12.784145   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:13.039377   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:13.160955   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:13.179165   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:13.283602   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:13.539248   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:13.661225   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:13.680316   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:13.783562   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:14.039621   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:14.160982   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:14.179398   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:14.283915   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:14.539446   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:14.660598   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:14.680098   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:14.783276   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:15.040611   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:15.161273   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:15.180105   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:15.283942   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:15.540124   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:15.661802   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:15.679504   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:15.784188   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.039739   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:16.161593   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:16.180070   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:16.284822   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.539238   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:16.660310   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:16.679470   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:16.783734   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:17.039373   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:17.161296   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.180151   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:17.283701   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:17.540225   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:17.661003   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.761726   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:17.783644   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.039769   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:18.161559   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:18.180182   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:18.283352   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.540202   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:18.661598   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:18.679655   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:18.783707   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:19.039499   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:19.161254   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.180130   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:19.283902   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:19.539530   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:19.661177   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.679706   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:19.784452   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:20.040346   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:20.160644   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:20.180028   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:20.283410   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:20.538896   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:20.661147   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:20.679106   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:20.783435   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:21.038807   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:21.161014   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:21.179345   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:21.283568   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:21.538994   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:21.662006   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:21.679467   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:21.783791   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:22.039201   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:22.160711   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.180297   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.284007   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:22.539433   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:22.660800   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.679062   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.784194   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:23.039616   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:23.160951   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:23.179168   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:23.283634   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:23.539153   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:23.660339   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:23.679445   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:23.783620   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:24.038802   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:24.161300   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:24.179718   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:24.284542   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:24.540375   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:24.661941   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:24.681218   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:24.784080   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.040304   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:25.161099   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:25.179896   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:25.284901   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.540108   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:25.661616   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:25.762076   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:25.782883   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:26.041596   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:26.167630   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.179816   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:26.283571   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:26.539143   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:26.662298   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.679766   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:26.783940   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.052211   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:27.161717   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:27.262964   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:27.284377   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.540510   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:27.661638   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:27.680598   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:27.785738   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:28.041263   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:28.161640   16270 kapi.go:107] duration metric: took 1m5.503505975s to wait for kubernetes.io/minikube-addons=registry ...
	I1211 23:57:28.180917   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:28.284502   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:28.538987   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:28.679585   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:28.784373   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:29.065886   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:29.205118   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:29.284195   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:29.540311   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:29.680148   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:29.783762   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.039754   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.180514   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.283862   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.539085   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.679989   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.783892   16270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:31.039901   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.180292   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:31.283386   16270 kapi.go:107] duration metric: took 1m8.002672068s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1211 23:57:31.539876   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.679435   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.050813   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.179971   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.539656   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.680852   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.039566   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.180184   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.539695   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.679806   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.039433   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.180098   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.540146   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.679779   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.039495   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.180633   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.539608   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.680729   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.039284   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.180412   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.539966   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.679575   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.038997   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.179740   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.539581   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.680950   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.039067   16270 kapi.go:107] duration metric: took 1m8.002617213s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1211 23:57:38.042620   16270 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-758245 cluster.
	I1211 23:57:38.043999   16270 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1211 23:57:38.045172   16270 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1211 23:57:38.179907   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.683060   16270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:39.179242   16270 kapi.go:107] duration metric: took 1m15.502616024s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1211 23:57:39.180790   16270 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, cloud-spanner, storage-provisioner, inspektor-gadget, amd-gpu-device-plugin, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1211 23:57:39.181710   16270 addons.go:530] duration metric: took 1m17.424183357s for enable addons: enabled=[nvidia-device-plugin registry-creds cloud-spanner storage-provisioner inspektor-gadget amd-gpu-device-plugin ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1211 23:57:39.181747   16270 start.go:247] waiting for cluster config update ...
	I1211 23:57:39.181767   16270 start.go:256] writing updated cluster config ...
	I1211 23:57:39.181997   16270 ssh_runner.go:195] Run: rm -f paused
	I1211 23:57:39.185762   16270 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1211 23:57:39.188343   16270 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xxwm5" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.191543   16270 pod_ready.go:94] pod "coredns-66bc5c9577-xxwm5" is "Ready"
	I1211 23:57:39.191559   16270 pod_ready.go:86] duration metric: took 3.187758ms for pod "coredns-66bc5c9577-xxwm5" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.192944   16270 pod_ready.go:83] waiting for pod "etcd-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.195813   16270 pod_ready.go:94] pod "etcd-addons-758245" is "Ready"
	I1211 23:57:39.195827   16270 pod_ready.go:86] duration metric: took 2.86476ms for pod "etcd-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.197348   16270 pod_ready.go:83] waiting for pod "kube-apiserver-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.200082   16270 pod_ready.go:94] pod "kube-apiserver-addons-758245" is "Ready"
	I1211 23:57:39.200101   16270 pod_ready.go:86] duration metric: took 2.735703ms for pod "kube-apiserver-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.201546   16270 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.589447   16270 pod_ready.go:94] pod "kube-controller-manager-addons-758245" is "Ready"
	I1211 23:57:39.589485   16270 pod_ready.go:86] duration metric: took 387.907975ms for pod "kube-controller-manager-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:39.788833   16270 pod_ready.go:83] waiting for pod "kube-proxy-2ldz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:40.189497   16270 pod_ready.go:94] pod "kube-proxy-2ldz5" is "Ready"
	I1211 23:57:40.189519   16270 pod_ready.go:86] duration metric: took 400.664534ms for pod "kube-proxy-2ldz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:40.389318   16270 pod_ready.go:83] waiting for pod "kube-scheduler-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:40.789063   16270 pod_ready.go:94] pod "kube-scheduler-addons-758245" is "Ready"
	I1211 23:57:40.789093   16270 pod_ready.go:86] duration metric: took 399.74164ms for pod "kube-scheduler-addons-758245" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 23:57:40.789108   16270 pod_ready.go:40] duration metric: took 1.603316541s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1211 23:57:40.830425   16270 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1211 23:57:40.832093   16270 out.go:179] * Done! kubectl is now configured to use "addons-758245" cluster and "default" namespace by default
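	The gcp-auth note printed at 23:57:38 above says credential mounting can be skipped for a specific pod by adding a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod follows; only the label key comes from the log output, while the label value "true", the pod name, and the reuse of the busybox image pulled later in this run are illustrative assumptions.
	
	# Sketch: opt a pod out of GCP credential injection via the gcp-auth-skip-secret label.
	# Label key taken from the minikube output above; value, pod name, and image are assumed.
	kubectl --context addons-758245 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: shell
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	EOF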
	
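	The pod_ready.go lines at 23:57:39-40 describe an extra wait of up to 4m0s for "kube-system" pods carrying one of the core labels (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler) to become "Ready". A rough manual approximation with kubectl is sketched below; minikube performs this check through client-go rather than kubectl, and unlike this sketch it also accepts pods that disappear instead of becoming Ready.
	
	# Sketch: approximate the pod_ready.go check by waiting on each label selector in turn.
	for selector in k8s-app=kube-dns component=etcd component=kube-apiserver \
	                component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context addons-758245 -n kube-system wait pod \
	    -l "$selector" --for=condition=Ready --timeout=4m
	done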
	
	==> CRI-O <==
	Dec 11 23:57:41 addons-758245 crio[774]: time="2025-12-11T23:57:41.64586785Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c38f73d5-62ab-4464-86b8-3700f6e08839 name=/runtime.v1.ImageService/PullImage
	Dec 11 23:57:41 addons-758245 crio[774]: time="2025-12-11T23:57:41.64735768Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 11 23:57:42 addons-758245 crio[774]: time="2025-12-11T23:57:42.248166582Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c38f73d5-62ab-4464-86b8-3700f6e08839 name=/runtime.v1.ImageService/PullImage
	Dec 11 23:57:42 addons-758245 crio[774]: time="2025-12-11T23:57:42.248666059Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3c01e2f6-964f-4571-885b-332734175d73 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 23:57:42 addons-758245 crio[774]: time="2025-12-11T23:57:42.249856424Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0534309-faf2-4008-9731-46e883e12931 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 23:57:42 addons-758245 crio[774]: time="2025-12-11T23:57:42.253017702Z" level=info msg="Creating container: default/busybox/busybox" id=408580ab-cd51-4538-bd84-232b54087ac8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 11 23:57:42 addons-758245 crio[774]: time="2025-12-11T23:57:42.253115882Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 11 23:57:42 addons-758245 crio[774]: time="2025-12-11T23:57:42.258105737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 11 23:57:42 addons-758245 crio[774]: time="2025-12-11T23:57:42.258596382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 11 23:57:42 addons-758245 crio[774]: time="2025-12-11T23:57:42.286679977Z" level=info msg="Created container 261d6be7407c9d783bf0da0af76d89573fed0460e34b050dac8fe2b0eea04ea5: default/busybox/busybox" id=408580ab-cd51-4538-bd84-232b54087ac8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 11 23:57:42 addons-758245 crio[774]: time="2025-12-11T23:57:42.28713461Z" level=info msg="Starting container: 261d6be7407c9d783bf0da0af76d89573fed0460e34b050dac8fe2b0eea04ea5" id=a1746208-d302-483a-bd2f-ad80ebe7cb7c name=/runtime.v1.RuntimeService/StartContainer
	Dec 11 23:57:42 addons-758245 crio[774]: time="2025-12-11T23:57:42.288780517Z" level=info msg="Started container" PID=6204 containerID=261d6be7407c9d783bf0da0af76d89573fed0460e34b050dac8fe2b0eea04ea5 description=default/busybox/busybox id=a1746208-d302-483a-bd2f-ad80ebe7cb7c name=/runtime.v1.RuntimeService/StartContainer sandboxID=04414ff40faa42cf9ce72e5afeb858c125af8e79bdfe1c1d71f12d82056a0545
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.40487664Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1/POD" id=b4579c5e-e22e-4235-96d5-d63c499506cd name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.404937655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.411638812Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1 Namespace:local-path-storage ID:0e2b139f3d4c294603b0311e5dae093ac9dc0bea654233f1baa94c07253faee7 UID:b508e1d7-9a29-4fca-877b-efe4dd5d6aee NetNS:/var/run/netns/9aed9ea5-cec8-4a84-84ed-907d78c0b81d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132ca8}] Aliases:map[]}"
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.411673102Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1 to CNI network \"kindnet\" (type=ptp)"
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.422398925Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1 Namespace:local-path-storage ID:0e2b139f3d4c294603b0311e5dae093ac9dc0bea654233f1baa94c07253faee7 UID:b508e1d7-9a29-4fca-877b-efe4dd5d6aee NetNS:/var/run/netns/9aed9ea5-cec8-4a84-84ed-907d78c0b81d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132ca8}] Aliases:map[]}"
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.422587315Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1 for CNI network kindnet (type=ptp)"
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.423731852Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.425094498Z" level=info msg="Ran pod sandbox 0e2b139f3d4c294603b0311e5dae093ac9dc0bea654233f1baa94c07253faee7 with infra container: local-path-storage/helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1/POD" id=b4579c5e-e22e-4235-96d5-d63c499506cd name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.426288623Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f4c93be1-4bb0-4f31-b6f3-ab495cd84dfd name=/runtime.v1.ImageService/ImageStatus
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.42650558Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=f4c93be1-4bb0-4f31-b6f3-ab495cd84dfd name=/runtime.v1.ImageService/ImageStatus
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.426572923Z" level=info msg="Neither image nor artifact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=f4c93be1-4bb0-4f31-b6f3-ab495cd84dfd name=/runtime.v1.ImageService/ImageStatus
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.42714306Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=6a0ba633-2c02-4b18-b3e2-d921cfeaa448 name=/runtime.v1.ImageService/PullImage
	Dec 11 23:57:50 addons-758245 crio[774]: time="2025-12-11T23:57:50.428556791Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	261d6be7407c9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   04414ff40faa4       busybox                                     default
	3732d2a3f8838       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          13 seconds ago       Running             csi-snapshotter                          0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	6f99d26cf0c1e       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             13 seconds ago       Exited              patch                                    2                   79682b8051f8a       gcp-auth-certs-patch-t8kqf                  gcp-auth
	5cd186f4373ab       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 13 seconds ago       Running             gcp-auth                                 0                   f765eaa715430       gcp-auth-78565c9fb4-4bv9l                   gcp-auth
	eb9ef1b7664a7       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 seconds ago       Running             csi-provisioner                          0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	569f61ef928d0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            16 seconds ago       Running             liveness-probe                           0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	c3b2706600ae1       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	e659f5026c837       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            17 seconds ago       Running             gadget                                   0                   f3c1d42c1401f       gadget-bqt8q                                gadget
	664ac9c24ea7c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                19 seconds ago       Running             node-driver-registrar                    0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	2eed610f0b598       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             20 seconds ago       Running             controller                               0                   ec5180c4baa6d       ingress-nginx-controller-85d4c799dd-p5j5s   ingress-nginx
	64e96cfc8140e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              23 seconds ago       Running             registry-proxy                           0                   368fa3ba4a572       registry-proxy-9n7hj                        kube-system
	335878b08bc59       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     25 seconds ago       Running             nvidia-device-plugin-ctr                 0                   e3cfa9c4ff142       nvidia-device-plugin-daemonset-5r9hw        kube-system
	1352ceaf0a2b3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   27 seconds ago       Exited              create                                   0                   9550101b37043       gcp-auth-certs-create-lqwvz                 gcp-auth
	6cbca1534843d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   28 seconds ago       Running             csi-external-health-monitor-controller   0                   b7406976110bc       csi-hostpathplugin-5nn2t                    kube-system
	e70723d59b2b1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     29 seconds ago       Running             amd-gpu-device-plugin                    0                   8a81a3d12a757       amd-gpu-device-plugin-t4nwx                 kube-system
	2b49798329427       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   30 seconds ago       Exited              patch                                    0                   8533d0fb19ad2       ingress-nginx-admission-patch-cd4bb         ingress-nginx
	7e0fe81bd1c04       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      30 seconds ago       Running             volume-snapshot-controller               0                   baea1032a922e       snapshot-controller-7d9fbc56b8-4vpl2        kube-system
	a69fa017a8d1a       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             30 seconds ago       Running             csi-attacher                             0                   c74618a3c25db       csi-hostpath-attacher-0                     kube-system
	d4852af2ae305       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             31 seconds ago       Running             local-path-provisioner                   0                   f4ded4c1ed6d3       local-path-provisioner-648f6765c9-6wlm4     local-path-storage
	25bad7ed4d19e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   32 seconds ago       Exited              create                                   0                   4a1582e49a1b7       ingress-nginx-admission-create-r7lnc        ingress-nginx
	8e6d8441c0b88       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      33 seconds ago       Running             volume-snapshot-controller               0                   5c1b0fc983bec       snapshot-controller-7d9fbc56b8-tgsc4        kube-system
	b054779b4f384       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              34 seconds ago       Running             yakd                                     0                   87f53fc55af57       yakd-dashboard-5ff678cb9-9jv4p              yakd-dashboard
	aeaae182100e6       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              36 seconds ago       Running             csi-resizer                              0                   0afe76df49e5e       csi-hostpath-resizer-0                      kube-system
	b246671cb7ebd       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        37 seconds ago       Running             metrics-server                           0                   a1c838608ad1a       metrics-server-85b7d694d7-lzpx2             kube-system
	533b2e5a9c2d2       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               39 seconds ago       Running             minikube-ingress-dns                     0                   cd2edf1bf3a73       kube-ingress-dns-minikube                   kube-system
	d00859d804fd2       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               44 seconds ago       Running             cloud-spanner-emulator                   0                   f5daef4c23552       cloud-spanner-emulator-5bdddb765-ttz8g      default
	138f9c8dcb50c       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           46 seconds ago       Running             registry                                 0                   95fb50f592ecc       registry-6b586f9694-ctpbw                   kube-system
	a095ba34edcca       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             47 seconds ago       Running             coredns                                  0                   ea3a14f569e09       coredns-66bc5c9577-xxwm5                    kube-system
	9a13d73cda53b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             48 seconds ago       Running             storage-provisioner                      0                   48f5833231028       storage-provisioner                         kube-system
	746ec0a05954f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   3ea4471cba438       kube-proxy-2ldz5                            kube-system
	9f62b444d09ef       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   e84c85e088226       kindnet-vctlp                               kube-system
	c4b3ad93ba2e0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   5bbfb6d65f68e       kube-apiserver-addons-758245                kube-system
	47879b4c9f9dd       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   bc07433e66205       kube-controller-manager-addons-758245       kube-system
	49a709e50508b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   64e73b4884718       etcd-addons-758245                          kube-system
	0ff7242204c8f       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   1330ac5787bff       kube-scheduler-addons-758245                kube-system
	
	
	==> coredns [a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e] <==
	[INFO] 10.244.0.17:47665 - 42771 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151673s
	[INFO] 10.244.0.17:35417 - 38757 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008511s
	[INFO] 10.244.0.17:35417 - 39015 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012065s
	[INFO] 10.244.0.17:49670 - 41744 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000051699s
	[INFO] 10.244.0.17:49670 - 42015 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000099612s
	[INFO] 10.244.0.17:36231 - 17699 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000066408s
	[INFO] 10.244.0.17:36231 - 17452 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000109278s
	[INFO] 10.244.0.17:53946 - 51082 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000072734s
	[INFO] 10.244.0.17:53946 - 50852 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000111724s
	[INFO] 10.244.0.17:56262 - 24145 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000076216s
	[INFO] 10.244.0.17:56262 - 23993 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000103684s
	[INFO] 10.244.0.22:38886 - 53734 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000200426s
	[INFO] 10.244.0.22:42946 - 33010 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000290989s
	[INFO] 10.244.0.22:57793 - 44967 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125488s
	[INFO] 10.244.0.22:44700 - 12547 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000148897s
	[INFO] 10.244.0.22:52739 - 35100 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010872s
	[INFO] 10.244.0.22:42253 - 38035 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130161s
	[INFO] 10.244.0.22:58179 - 3192 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005489818s
	[INFO] 10.244.0.22:39433 - 43415 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005649837s
	[INFO] 10.244.0.22:55774 - 43919 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004270109s
	[INFO] 10.244.0.22:49549 - 2075 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004477515s
	[INFO] 10.244.0.22:51959 - 4964 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004353981s
	[INFO] 10.244.0.22:52561 - 64015 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004507092s
	[INFO] 10.244.0.22:55871 - 30236 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002290435s
	[INFO] 10.244.0.22:52156 - 45926 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00233035s
	
	
	==> describe nodes <==
	Name:               addons-758245
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-758245
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=addons-758245
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_11T23_56_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-758245
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-758245"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 11 Dec 2025 23:56:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-758245
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 11 Dec 2025 23:57:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 11 Dec 2025 23:57:47 +0000   Thu, 11 Dec 2025 23:56:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 11 Dec 2025 23:57:47 +0000   Thu, 11 Dec 2025 23:56:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 11 Dec 2025 23:57:47 +0000   Thu, 11 Dec 2025 23:56:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 11 Dec 2025 23:57:47 +0000   Thu, 11 Dec 2025 23:57:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-758245
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                7478ade5-211a-4ace-866e-8b508dbc1779
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-5bdddb765-ttz8g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gadget                      gadget-bqt8q                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  gcp-auth                    gcp-auth-78565c9fb4-4bv9l                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-p5j5s                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         88s
	  kube-system                 amd-gpu-device-plugin-t4nwx                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-66bc5c9577-xxwm5                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     90s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpathplugin-5nn2t                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 etcd-addons-758245                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         96s
	  kube-system                 kindnet-vctlp                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-addons-758245                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-addons-758245                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-2ldz5                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-addons-758245                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 metrics-server-85b7d694d7-lzpx2                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         88s
	  kube-system                 nvidia-device-plugin-daemonset-5r9hw                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 registry-6b586f9694-ctpbw                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-creds-764b6fb674-rkslf                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-proxy-9n7hj                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 snapshot-controller-7d9fbc56b8-4vpl2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 snapshot-controller-7d9fbc56b8-tgsc4                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  local-path-storage          helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-648f6765c9-6wlm4                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9jv4p                                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  Starting                 100s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  100s (x8 over 100s)  kubelet          Node addons-758245 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x8 over 100s)  kubelet          Node addons-758245 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x8 over 100s)  kubelet          Node addons-758245 status is now: NodeHasSufficientPID
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node addons-758245 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node addons-758245 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node addons-758245 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s                  node-controller  Node addons-758245 event: Registered Node addons-758245 in Controller
	  Normal  NodeReady                49s                  kubelet          Node addons-758245 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec11 23:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001845] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.355656] i8042: Warning: Keylock active
	[  +0.012251] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.467785] block sda: the capability attribute has been deprecated.
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47] <==
	{"level":"warn","ts":"2025-12-11T23:56:12.848195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.863385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.875690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.882640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.890787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.897722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.905344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.913175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.920223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.926926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.933962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.940704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.947776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.955065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.961061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.967936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.988461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:12.995949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:13.003567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:13.059994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:24.110764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:50.458901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:50.465314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:50.480890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T23:56:50.487323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43288","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [5cd186f4373abe8e8259e028de1c7e0ad7a5177f5742d52190634427325d42a7] <==
	2025/12/11 23:57:37 GCP Auth Webhook started!
	2025/12/11 23:57:41 Ready to marshal response ...
	2025/12/11 23:57:41 Ready to write response ...
	2025/12/11 23:57:41 Ready to marshal response ...
	2025/12/11 23:57:41 Ready to write response ...
	2025/12/11 23:57:41 Ready to marshal response ...
	2025/12/11 23:57:41 Ready to write response ...
	2025/12/11 23:57:50 Ready to marshal response ...
	2025/12/11 23:57:50 Ready to write response ...
	2025/12/11 23:57:50 Ready to marshal response ...
	2025/12/11 23:57:50 Ready to write response ...
	
	
	==> kernel <==
	 23:57:51 up 40 min,  0 user,  load average: 1.85, 0.90, 0.35
	Linux addons-758245 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc] <==
	I1211 23:56:22.294521       1 main.go:148] setting mtu 1500 for CNI 
	I1211 23:56:22.294536       1 main.go:178] kindnetd IP family: "ipv4"
	I1211 23:56:22.294560       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-11T23:56:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1211 23:56:22.589994       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1211 23:56:22.590021       1 controller.go:381] "Waiting for informer caches to sync"
	I1211 23:56:22.590033       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1211 23:56:22.590470       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1211 23:56:52.590767       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1211 23:56:52.590770       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1211 23:56:52.590770       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1211 23:56:52.590834       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1211 23:56:54.190465       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1211 23:56:54.190504       1 metrics.go:72] Registering metrics
	I1211 23:56:54.190555       1 controller.go:711] "Syncing nftables rules"
	I1211 23:57:02.597171       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:57:02.597221       1 main.go:301] handling current node
	I1211 23:57:12.590344       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:57:12.590450       1 main.go:301] handling current node
	I1211 23:57:22.590260       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:57:22.590285       1 main.go:301] handling current node
	I1211 23:57:32.590140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:57:32.590181       1 main.go:301] handling current node
	I1211 23:57:42.590689       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:57:42.590716       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69] <==
	E1211 23:57:14.845994       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1211 23:57:14.846288       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.203.240:443: connect: connection refused" logger="UnhandledError"
	E1211 23:57:14.847651       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.203.240:443: connect: connection refused" logger="UnhandledError"
	E1211 23:57:14.853747       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.203.240:443: connect: connection refused" logger="UnhandledError"
	W1211 23:57:15.846977       1 handler_proxy.go:99] no RequestInfo found in the context
	W1211 23:57:15.847011       1 handler_proxy.go:99] no RequestInfo found in the context
	E1211 23:57:15.847025       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1211 23:57:15.847048       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1211 23:57:15.847091       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1211 23:57:15.848214       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1211 23:57:19.878714       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.203.240:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1211 23:57:19.878830       1 handler_proxy.go:99] no RequestInfo found in the context
	E1211 23:57:19.878867       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1211 23:57:19.889834       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1211 23:57:49.456688       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46696: use of closed network connection
	E1211 23:57:49.594752       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46726: use of closed network connection
	
	
	==> kube-controller-manager [47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e] <==
	I1211 23:56:20.444372       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1211 23:56:20.444382       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1211 23:56:20.444351       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1211 23:56:20.444388       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1211 23:56:20.444430       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1211 23:56:20.444442       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1211 23:56:20.444485       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1211 23:56:20.444487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1211 23:56:20.444607       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1211 23:56:20.445035       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1211 23:56:20.445051       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1211 23:56:20.445859       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1211 23:56:20.446829       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1211 23:56:20.449331       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1211 23:56:20.458542       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1211 23:56:20.467060       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1211 23:56:23.033963       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1211 23:56:50.453354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1211 23:56:50.453511       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1211 23:56:50.453546       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1211 23:56:50.472899       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1211 23:56:50.476017       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1211 23:56:50.554565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1211 23:56:50.576744       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1211 23:57:05.400604       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7] <==
	I1211 23:56:22.110162       1 server_linux.go:53] "Using iptables proxy"
	I1211 23:56:22.273415       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1211 23:56:22.376603       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1211 23:56:22.379647       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1211 23:56:22.379772       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:56:22.687004       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1211 23:56:22.687154       1 server_linux.go:132] "Using iptables Proxier"
	I1211 23:56:22.776865       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:56:22.785429       1 server.go:527] "Version info" version="v1.34.2"
	I1211 23:56:22.785710       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:56:22.787827       1 config.go:200] "Starting service config controller"
	I1211 23:56:22.788056       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1211 23:56:22.788685       1 config.go:106] "Starting endpoint slice config controller"
	I1211 23:56:22.789522       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1211 23:56:22.788784       1 config.go:309] "Starting node config controller"
	I1211 23:56:22.789602       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1211 23:56:22.789627       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1211 23:56:22.789264       1 config.go:403] "Starting serviceCIDR config controller"
	I1211 23:56:22.789671       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1211 23:56:22.889681       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1211 23:56:22.889735       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1211 23:56:22.891986       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead] <==
	E1211 23:56:13.463415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1211 23:56:13.463536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1211 23:56:13.463994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1211 23:56:13.464083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1211 23:56:13.464182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1211 23:56:13.464215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1211 23:56:13.464246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1211 23:56:13.464255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1211 23:56:13.464265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1211 23:56:13.464286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1211 23:56:13.464337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1211 23:56:13.464364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:13.464364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:13.464415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1211 23:56:13.464541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1211 23:56:14.297241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1211 23:56:14.302184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1211 23:56:14.355170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1211 23:56:14.384272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:14.464779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1211 23:56:14.487731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:14.591157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1211 23:56:14.638138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1211 23:56:14.651089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1211 23:56:17.061868       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 11 23:57:27 addons-758245 kubelet[1293]: I1211 23:57:27.881142    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9n7hj" secret="" err="secret \"gcp-auth\" not found"
	Dec 11 23:57:27 addons-758245 kubelet[1293]: I1211 23:57:27.897348    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-9n7hj" podStartSLOduration=2.004999885 podStartE2EDuration="25.897328949s" podCreationTimestamp="2025-12-11 23:57:02 +0000 UTC" firstStartedPulling="2025-12-11 23:57:03.22122935 +0000 UTC m=+47.642424059" lastFinishedPulling="2025-12-11 23:57:27.113558399 +0000 UTC m=+71.534753123" observedRunningTime="2025-12-11 23:57:27.896803172 +0000 UTC m=+72.317997898" watchObservedRunningTime="2025-12-11 23:57:27.897328949 +0000 UTC m=+72.318523680"
	Dec 11 23:57:28 addons-758245 kubelet[1293]: I1211 23:57:28.885669    1293 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9n7hj" secret="" err="secret \"gcp-auth\" not found"
	Dec 11 23:57:30 addons-758245 kubelet[1293]: I1211 23:57:30.905676    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-p5j5s" podStartSLOduration=55.933094184 podStartE2EDuration="1m7.905656532s" podCreationTimestamp="2025-12-11 23:56:23 +0000 UTC" firstStartedPulling="2025-12-11 23:57:18.648429871 +0000 UTC m=+63.069624580" lastFinishedPulling="2025-12-11 23:57:30.620992216 +0000 UTC m=+75.042186928" observedRunningTime="2025-12-11 23:57:30.905133167 +0000 UTC m=+75.326327898" watchObservedRunningTime="2025-12-11 23:57:30.905656532 +0000 UTC m=+75.326851261"
	Dec 11 23:57:33 addons-758245 kubelet[1293]: I1211 23:57:33.923703    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-bqt8q" podStartSLOduration=64.550279036 podStartE2EDuration="1m10.923683575s" podCreationTimestamp="2025-12-11 23:56:23 +0000 UTC" firstStartedPulling="2025-12-11 23:57:27.176608791 +0000 UTC m=+71.597803500" lastFinishedPulling="2025-12-11 23:57:33.550013328 +0000 UTC m=+77.971208039" observedRunningTime="2025-12-11 23:57:33.923195304 +0000 UTC m=+78.344390033" watchObservedRunningTime="2025-12-11 23:57:33.923683575 +0000 UTC m=+78.344878304"
	Dec 11 23:57:34 addons-758245 kubelet[1293]: E1211 23:57:34.595750    1293 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 11 23:57:34 addons-758245 kubelet[1293]: E1211 23:57:34.595857    1293 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a05cdd-171e-4999-9c92-9696f6943f0c-gcr-creds podName:d7a05cdd-171e-4999-9c92-9696f6943f0c nodeName:}" failed. No retries permitted until 2025-12-11 23:58:06.595830164 +0000 UTC m=+111.017024894 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/d7a05cdd-171e-4999-9c92-9696f6943f0c-gcr-creds") pod "registry-creds-764b6fb674-rkslf" (UID: "d7a05cdd-171e-4999-9c92-9696f6943f0c") : secret "registry-creds-gcr" not found
	Dec 11 23:57:34 addons-758245 kubelet[1293]: I1211 23:57:34.704293    1293 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 11 23:57:34 addons-758245 kubelet[1293]: I1211 23:57:34.704340    1293 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 11 23:57:37 addons-758245 kubelet[1293]: I1211 23:57:37.654578    1293 scope.go:117] "RemoveContainer" containerID="4bcf74b4fcdfcb0428571deb67b284887fb1190166703adb2b70ccb5ec8c6959"
	Dec 11 23:57:37 addons-758245 kubelet[1293]: I1211 23:57:37.935668    1293 scope.go:117] "RemoveContainer" containerID="4bcf74b4fcdfcb0428571deb67b284887fb1190166703adb2b70ccb5ec8c6959"
	Dec 11 23:57:38 addons-758245 kubelet[1293]: I1211 23:57:38.038671    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-4bv9l" podStartSLOduration=65.686588093 podStartE2EDuration="1m8.038651286s" podCreationTimestamp="2025-12-11 23:56:30 +0000 UTC" firstStartedPulling="2025-12-11 23:57:34.861901532 +0000 UTC m=+79.283096254" lastFinishedPulling="2025-12-11 23:57:37.213964736 +0000 UTC m=+81.635159447" observedRunningTime="2025-12-11 23:57:37.957736429 +0000 UTC m=+82.378931159" watchObservedRunningTime="2025-12-11 23:57:38.038651286 +0000 UTC m=+82.459846016"
	Dec 11 23:57:38 addons-758245 kubelet[1293]: I1211 23:57:38.960317    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-5nn2t" podStartSLOduration=2.07286407 podStartE2EDuration="36.960298277s" podCreationTimestamp="2025-12-11 23:57:02 +0000 UTC" firstStartedPulling="2025-12-11 23:57:03.140361858 +0000 UTC m=+47.561556580" lastFinishedPulling="2025-12-11 23:57:38.027796078 +0000 UTC m=+82.448990787" observedRunningTime="2025-12-11 23:57:38.958678888 +0000 UTC m=+83.379873638" watchObservedRunningTime="2025-12-11 23:57:38.960298277 +0000 UTC m=+83.381493007"
	Dec 11 23:57:39 addons-758245 kubelet[1293]: I1211 23:57:39.131184    1293 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rfff\" (UniqueName: \"kubernetes.io/projected/e68e6c17-81e8-4e2d-a1f6-b6113bf760e6-kube-api-access-8rfff\") pod \"e68e6c17-81e8-4e2d-a1f6-b6113bf760e6\" (UID: \"e68e6c17-81e8-4e2d-a1f6-b6113bf760e6\") "
	Dec 11 23:57:39 addons-758245 kubelet[1293]: I1211 23:57:39.133892    1293 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68e6c17-81e8-4e2d-a1f6-b6113bf760e6-kube-api-access-8rfff" (OuterVolumeSpecName: "kube-api-access-8rfff") pod "e68e6c17-81e8-4e2d-a1f6-b6113bf760e6" (UID: "e68e6c17-81e8-4e2d-a1f6-b6113bf760e6"). InnerVolumeSpecName "kube-api-access-8rfff". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 11 23:57:39 addons-758245 kubelet[1293]: I1211 23:57:39.232423    1293 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8rfff\" (UniqueName: \"kubernetes.io/projected/e68e6c17-81e8-4e2d-a1f6-b6113bf760e6-kube-api-access-8rfff\") on node \"addons-758245\" DevicePath \"\""
	Dec 11 23:57:39 addons-758245 kubelet[1293]: I1211 23:57:39.951561    1293 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79682b8051f8a4f0403ccb75dcbd32f5c53ff4e9b5f17e33560a3bebd9015b5d"
	Dec 11 23:57:41 addons-758245 kubelet[1293]: I1211 23:57:41.449443    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/66f33ee0-430e-4cba-bcc7-3d37526bc70d-gcp-creds\") pod \"busybox\" (UID: \"66f33ee0-430e-4cba-bcc7-3d37526bc70d\") " pod="default/busybox"
	Dec 11 23:57:41 addons-758245 kubelet[1293]: I1211 23:57:41.449533    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5b4q\" (UniqueName: \"kubernetes.io/projected/66f33ee0-430e-4cba-bcc7-3d37526bc70d-kube-api-access-x5b4q\") pod \"busybox\" (UID: \"66f33ee0-430e-4cba-bcc7-3d37526bc70d\") " pod="default/busybox"
	Dec 11 23:57:42 addons-758245 kubelet[1293]: I1211 23:57:42.976699    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.372859413 podStartE2EDuration="1.976681565s" podCreationTimestamp="2025-12-11 23:57:41 +0000 UTC" firstStartedPulling="2025-12-11 23:57:41.645522573 +0000 UTC m=+86.066717285" lastFinishedPulling="2025-12-11 23:57:42.24934471 +0000 UTC m=+86.670539437" observedRunningTime="2025-12-11 23:57:42.974676509 +0000 UTC m=+87.395871239" watchObservedRunningTime="2025-12-11 23:57:42.976681565 +0000 UTC m=+87.397876296"
	Dec 11 23:57:49 addons-758245 kubelet[1293]: E1211 23:57:49.456619    1293 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49006->127.0.0.1:46579: write tcp 127.0.0.1:49006->127.0.0.1:46579: write: broken pipe
	Dec 11 23:57:50 addons-758245 kubelet[1293]: I1211 23:57:50.105006    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b508e1d7-9a29-4fca-877b-efe4dd5d6aee-script\") pod \"helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1\" (UID: \"b508e1d7-9a29-4fca-877b-efe4dd5d6aee\") " pod="local-path-storage/helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1"
	Dec 11 23:57:50 addons-758245 kubelet[1293]: I1211 23:57:50.105091    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b508e1d7-9a29-4fca-877b-efe4dd5d6aee-data\") pod \"helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1\" (UID: \"b508e1d7-9a29-4fca-877b-efe4dd5d6aee\") " pod="local-path-storage/helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1"
	Dec 11 23:57:50 addons-758245 kubelet[1293]: I1211 23:57:50.105142    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg592\" (UniqueName: \"kubernetes.io/projected/b508e1d7-9a29-4fca-877b-efe4dd5d6aee-kube-api-access-fg592\") pod \"helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1\" (UID: \"b508e1d7-9a29-4fca-877b-efe4dd5d6aee\") " pod="local-path-storage/helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1"
	Dec 11 23:57:50 addons-758245 kubelet[1293]: I1211 23:57:50.105177    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b508e1d7-9a29-4fca-877b-efe4dd5d6aee-gcp-creds\") pod \"helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1\" (UID: \"b508e1d7-9a29-4fca-877b-efe4dd5d6aee\") " pod="local-path-storage/helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1"
	
	
	==> storage-provisioner [9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98] <==
	W1211 23:57:27.320047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:29.322880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:29.326045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:31.328773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:31.331818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:33.334813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:33.340202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:35.343982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:35.349512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:37.351911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:37.354668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:39.357429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:39.361723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:41.363924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:41.367140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:43.370216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:43.373383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:45.375871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:45.380984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:47.383854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:47.386930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:49.389231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:49.393749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:51.396787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1211 23:57:51.399993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-758245 -n addons-758245
helpers_test.go:270: (dbg) Run:  kubectl --context addons-758245 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: test-local-path gcp-auth-certs-create-lqwvz gcp-auth-certs-patch-t8kqf ingress-nginx-admission-create-r7lnc ingress-nginx-admission-patch-cd4bb registry-creds-764b6fb674-rkslf helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-758245 describe pod test-local-path gcp-auth-certs-create-lqwvz gcp-auth-certs-patch-t8kqf ingress-nginx-admission-create-r7lnc ingress-nginx-admission-patch-cd4bb registry-creds-764b6fb674-rkslf helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-758245 describe pod test-local-path gcp-auth-certs-create-lqwvz gcp-auth-certs-patch-t8kqf ingress-nginx-admission-create-r7lnc ingress-nginx-admission-patch-cd4bb registry-creds-764b6fb674-rkslf helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1: exit status 1 (69.651857ms)

-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-brnxk (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-brnxk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-lqwvz" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-t8kqf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-r7lnc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cd4bb" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-rkslf" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-758245 describe pod test-local-path gcp-auth-certs-create-lqwvz gcp-auth-certs-patch-t8kqf ingress-nginx-admission-create-r7lnc ingress-nginx-admission-patch-cd4bb registry-creds-764b6fb674-rkslf helper-pod-create-pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable headlamp --alsologtostderr -v=1: exit status 11 (240.131935ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1211 23:57:52.108416   25291 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:57:52.108578   25291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:52.108588   25291 out.go:374] Setting ErrFile to fd 2...
	I1211 23:57:52.108592   25291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:52.108793   25291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:57:52.109046   25291 mustload.go:66] Loading cluster: addons-758245
	I1211 23:57:52.109394   25291 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:57:52.109412   25291 addons.go:622] checking whether the cluster is paused
	I1211 23:57:52.109526   25291 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:57:52.109544   25291 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:57:52.110036   25291 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:57:52.130438   25291 ssh_runner.go:195] Run: systemctl --version
	I1211 23:57:52.130516   25291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:57:52.147650   25291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:57:52.242743   25291 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:57:52.242824   25291 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:57:52.269115   25291 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:57:52.269141   25291 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:57:52.269147   25291 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:57:52.269154   25291 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:57:52.269158   25291 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:57:52.269164   25291 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:57:52.269168   25291 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:57:52.269172   25291 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:57:52.269177   25291 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:57:52.269186   25291 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:57:52.269195   25291 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:57:52.269200   25291 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:57:52.269207   25291 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:57:52.269212   25291 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:57:52.269216   25291 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:57:52.269227   25291 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:57:52.269234   25291 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:57:52.269239   25291 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:57:52.269244   25291 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:57:52.269248   25291 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:57:52.269260   25291 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:57:52.269269   25291 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:57:52.269273   25291 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:57:52.269280   25291 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:57:52.269285   25291 cri.go:89] found id: ""
	I1211 23:57:52.269335   25291 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:57:52.282170   25291 out.go:203] 
	W1211 23:57:52.283321   25291 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:57:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:57:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:57:52.283342   25291 out.go:285] * 
	* 
	W1211 23:57:52.286150   25291 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:57:52.287226   25291 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.46s)

TestAddons/parallel/CloudSpanner (5.27s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-ttz8g" [e6a7d46e-8943-4e56-9d5e-b5543b8a4a66] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003198531s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (261.624843ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1211 23:58:08.032441   27511 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:58:08.032584   27511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:08.032596   27511 out.go:374] Setting ErrFile to fd 2...
	I1211 23:58:08.032603   27511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:08.032807   27511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:58:08.033084   27511 mustload.go:66] Loading cluster: addons-758245
	I1211 23:58:08.033443   27511 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:08.033465   27511 addons.go:622] checking whether the cluster is paused
	I1211 23:58:08.033569   27511 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:08.033585   27511 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:58:08.033951   27511 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:58:08.053073   27511 ssh_runner.go:195] Run: systemctl --version
	I1211 23:58:08.053155   27511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:58:08.070186   27511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:58:08.167735   27511 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:58:08.167812   27511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:58:08.198662   27511 cri.go:89] found id: "bebb2b9dc0db586ac1f726cd9cf64469bb0451cec9c7113a8be04e821596c916"
	I1211 23:58:08.198686   27511 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:58:08.198693   27511 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:58:08.198698   27511 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:58:08.198703   27511 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:58:08.198709   27511 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:58:08.198713   27511 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:58:08.198718   27511 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:58:08.198723   27511 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:58:08.198744   27511 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:58:08.198752   27511 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:58:08.198756   27511 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:58:08.198761   27511 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:58:08.198765   27511 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:58:08.198770   27511 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:58:08.198777   27511 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:58:08.198782   27511 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:58:08.198788   27511 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:58:08.198791   27511 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:58:08.198795   27511 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:58:08.198802   27511 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:58:08.198807   27511 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:58:08.198820   27511 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:58:08.198832   27511 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:58:08.198837   27511 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:58:08.198842   27511 cri.go:89] found id: ""
	I1211 23:58:08.198888   27511 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:58:08.218279   27511 out.go:203] 
	W1211 23:58:08.219288   27511 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:58:08.219306   27511 out.go:285] * 
	* 
	W1211 23:58:08.222539   27511 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:58:08.223606   27511 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)
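Every addons-disable failure in this run follows the same pattern visible in the stderr above: the pre-flight paused-cluster check shells into the node and runs "sudo runc list -f json", which exits 1 because /run/runc does not exist on this CRI-O node, and that failure is then surfaced as MK_ADDON_DISABLE_PAUSED (exit status 11). The Go sketch below only reproduces that probe from outside the cluster and shows one tolerant way to read the missing state directory; the profile name is taken from the log, and the fallback behaviour is an assumption for illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the log shows failing inside the node:
	//   sudo runc list -f json  ->  "open /run/runc: no such file or directory"
	out, err := exec.Command("minikube", "-p", "addons-758245", "ssh", "--",
		"sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "/run/runc: no such file or directory") {
			// No runc state directory at all, so nothing can be paused; a
			// tolerant probe could report an empty list instead of aborting.
			fmt.Println("no runc state dir; treating as zero paused containers")
			return
		}
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers (JSON): %s\n", out)
}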

                                                
                                    
TestAddons/parallel/LocalPath (8.08s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-758245 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-758245 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-758245 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [ce522f41-2ff8-4337-b463-695756c6a3ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [ce522f41-2ff8-4337-b463-695756c6a3ff] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [ce522f41-2ff8-4337-b463-695756c6a3ff] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002444605s
addons_test.go:969: (dbg) Run:  kubectl --context addons-758245 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 ssh "cat /opt/local-path-provisioner/pvc-7d01c9d1-457a-4b92-a9c5-381c4cde1be1_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-758245 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-758245 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (238.154664ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 23:57:57.726165   25822 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:57:57.726544   25822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:57.726555   25822 out.go:374] Setting ErrFile to fd 2...
	I1211 23:57:57.726560   25822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:57.726761   25822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:57:57.727004   25822 mustload.go:66] Loading cluster: addons-758245
	I1211 23:57:57.727304   25822 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:57:57.727322   25822 addons.go:622] checking whether the cluster is paused
	I1211 23:57:57.727409   25822 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:57:57.727421   25822 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:57:57.727774   25822 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:57:57.745110   25822 ssh_runner.go:195] Run: systemctl --version
	I1211 23:57:57.745156   25822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:57:57.761154   25822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:57:57.855468   25822 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:57:57.855594   25822 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:57:57.882888   25822 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:57:57.882907   25822 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:57:57.882913   25822 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:57:57.882917   25822 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:57:57.882921   25822 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:57:57.882925   25822 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:57:57.882930   25822 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:57:57.882949   25822 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:57:57.882958   25822 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:57:57.882967   25822 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:57:57.882975   25822 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:57:57.882980   25822 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:57:57.882988   25822 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:57:57.882994   25822 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:57:57.883002   25822 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:57:57.883015   25822 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:57:57.883023   25822 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:57:57.883029   25822 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:57:57.883032   25822 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:57:57.883036   25822 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:57:57.883041   25822 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:57:57.883046   25822 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:57:57.883054   25822 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:57:57.883059   25822 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:57:57.883068   25822 cri.go:89] found id: ""
	I1211 23:57:57.883117   25822 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:57:57.896999   25822 out.go:203] 
	W1211 23:57:57.898367   25822 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:57:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:57:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:57:57.898389   25822 out.go:285] * 
	* 
	W1211 23:57:57.901623   25822 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:57:57.903223   25822 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.08s)
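Before the disable step failed, the LocalPath workload itself passed: the repeated helpers_test.go:403 lines above are a poll of "kubectl get pvc test-pvc -o jsonpath={.status.phase}" until the claim leaves Pending. A rough Go equivalent of that loop is sketched below; the context name, namespace, and claim name come from the log, while the target phase ("Bound") and the 2-second interval are assumptions, not the test's actual constants.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls kubectl (as the test helpers do) until the claim
// reports the wanted phase or the deadline expires.
func waitForPVCPhase(kubectx, ns, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx, "get", "pvc", name,
			"-n", ns, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", ns, name, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-758245", "default", "test-pvc", "Bound", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}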

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-5r9hw" [cf07904f-5cff-48d7-a33a-06925d2e0b55] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003711784s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (233.494375ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 23:57:54.890085   25512 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:57:54.890359   25512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:54.890370   25512 out.go:374] Setting ErrFile to fd 2...
	I1211 23:57:54.890374   25512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:54.890588   25512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:57:54.890829   25512 mustload.go:66] Loading cluster: addons-758245
	I1211 23:57:54.891108   25512 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:57:54.891124   25512 addons.go:622] checking whether the cluster is paused
	I1211 23:57:54.891205   25512 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:57:54.891217   25512 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:57:54.891557   25512 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:57:54.908754   25512 ssh_runner.go:195] Run: systemctl --version
	I1211 23:57:54.908795   25512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:57:54.925500   25512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:57:55.019764   25512 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:57:55.019851   25512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:57:55.046573   25512 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:57:55.046611   25512 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:57:55.046617   25512 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:57:55.046622   25512 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:57:55.046626   25512 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:57:55.046632   25512 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:57:55.046637   25512 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:57:55.046641   25512 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:57:55.046646   25512 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:57:55.046662   25512 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:57:55.046670   25512 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:57:55.046675   25512 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:57:55.046682   25512 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:57:55.046688   25512 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:57:55.046695   25512 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:57:55.046712   25512 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:57:55.046725   25512 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:57:55.046731   25512 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:57:55.046735   25512 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:57:55.046739   25512 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:57:55.046743   25512 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:57:55.046747   25512 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:57:55.046751   25512 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:57:55.046756   25512 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:57:55.046760   25512 cri.go:89] found id: ""
	I1211 23:57:55.046830   25512 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:57:55.059527   25512 out.go:203] 
	W1211 23:57:55.060547   25512 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:57:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:57:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:57:55.060563   25512 out.go:285] * 
	* 
	W1211 23:57:55.063394   25512 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:57:55.064493   25512 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.24s)

                                                
                                    
TestAddons/parallel/Yakd (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-9jv4p" [b7535ae1-a841-4afd-a05b-f3651f6bd903] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.050100059s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable yakd --alsologtostderr -v=1: exit status 11 (242.820463ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 23:58:14.338609   27888 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:58:14.338750   27888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:14.338761   27888 out.go:374] Setting ErrFile to fd 2...
	I1211 23:58:14.338768   27888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:14.338974   27888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:58:14.339207   27888 mustload.go:66] Loading cluster: addons-758245
	I1211 23:58:14.339583   27888 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:14.339604   27888 addons.go:622] checking whether the cluster is paused
	I1211 23:58:14.339717   27888 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:14.339736   27888 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:58:14.340299   27888 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:58:14.358696   27888 ssh_runner.go:195] Run: systemctl --version
	I1211 23:58:14.358738   27888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:58:14.376299   27888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:58:14.473680   27888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:58:14.473754   27888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:58:14.500630   27888 cri.go:89] found id: "bebb2b9dc0db586ac1f726cd9cf64469bb0451cec9c7113a8be04e821596c916"
	I1211 23:58:14.500649   27888 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:58:14.500653   27888 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:58:14.500656   27888 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:58:14.500660   27888 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:58:14.500665   27888 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:58:14.500668   27888 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:58:14.500670   27888 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:58:14.500673   27888 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:58:14.500680   27888 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:58:14.500683   27888 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:58:14.500686   27888 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:58:14.500689   27888 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:58:14.500692   27888 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:58:14.500695   27888 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:58:14.500701   27888 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:58:14.500707   27888 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:58:14.500715   27888 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:58:14.500718   27888 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:58:14.500720   27888 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:58:14.500725   27888 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:58:14.500730   27888 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:58:14.500733   27888 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:58:14.500736   27888 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:58:14.500739   27888 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:58:14.500744   27888 cri.go:89] found id: ""
	I1211 23:58:14.500780   27888 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:58:14.513171   27888 out.go:203] 
	W1211 23:58:14.514196   27888 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:58:14.514217   27888 out.go:285] * 
	* 
	W1211 23:58:14.517318   27888 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:58:14.518463   27888 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.29s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-t4nwx" [c2c64d04-3068-46a6-9981-66ab6ada3e01] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003730288s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-758245 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758245 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (239.235706ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 23:58:08.259621   27586 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:58:08.259774   27586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:08.259786   27586 out.go:374] Setting ErrFile to fd 2...
	I1211 23:58:08.259792   27586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:08.259984   27586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:58:08.260270   27586 mustload.go:66] Loading cluster: addons-758245
	I1211 23:58:08.260653   27586 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:08.260673   27586 addons.go:622] checking whether the cluster is paused
	I1211 23:58:08.260768   27586 config.go:182] Loaded profile config "addons-758245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:58:08.260786   27586 host.go:66] Checking if "addons-758245" exists ...
	I1211 23:58:08.261177   27586 cli_runner.go:164] Run: docker container inspect addons-758245 --format={{.State.Status}}
	I1211 23:58:08.279024   27586 ssh_runner.go:195] Run: systemctl --version
	I1211 23:58:08.279074   27586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758245
	I1211 23:58:08.295144   27586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/addons-758245/id_rsa Username:docker}
	I1211 23:58:08.387323   27586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:58:08.387389   27586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:58:08.413897   27586 cri.go:89] found id: "bebb2b9dc0db586ac1f726cd9cf64469bb0451cec9c7113a8be04e821596c916"
	I1211 23:58:08.413916   27586 cri.go:89] found id: "3732d2a3f883872d29a8fa95430ad51d25831fcfea4e55c7896a327d96599bcf"
	I1211 23:58:08.413922   27586 cri.go:89] found id: "eb9ef1b7664a78fa036df7cd1e340b260c6103f0043a248e15b8c4f874624584"
	I1211 23:58:08.413927   27586 cri.go:89] found id: "569f61ef928d0f8bcfb0087922cfff40b1d5e27ac23e6c818e3a160cce92903a"
	I1211 23:58:08.413931   27586 cri.go:89] found id: "c3b2706600ae15c248169ff8e9696d286596bb41798aadd3ed7d5f04ddeaeb09"
	I1211 23:58:08.413938   27586 cri.go:89] found id: "664ac9c24ea7c293c526b1eee38f31561cb464c0f1fb15991e83439838893a9f"
	I1211 23:58:08.413942   27586 cri.go:89] found id: "64e96cfc8140e0ea508cbaa4b1f46f6a77ac38ca8dedd5a6ac707b7d8a935013"
	I1211 23:58:08.413947   27586 cri.go:89] found id: "335878b08bc5952d78cfe80774396d448d7e43cfc4df78fe9b41b92ea329360a"
	I1211 23:58:08.413952   27586 cri.go:89] found id: "6cbca1534843d1584cdf90b2c3a08b630e41bed36ba8785830272efa337811be"
	I1211 23:58:08.413968   27586 cri.go:89] found id: "e70723d59b2b179aeebdf126b5c686f98bc9e70ce305f547af722fcfa5f6570b"
	I1211 23:58:08.413976   27586 cri.go:89] found id: "7e0fe81bd1c04f2678ef0211090a7a638b8f1cde3d56e0048932736929dbfa94"
	I1211 23:58:08.413981   27586 cri.go:89] found id: "a69fa017a8d1a4e1e016c291adabf69f06139db6064f64f4696832dec41770b5"
	I1211 23:58:08.413990   27586 cri.go:89] found id: "8e6d8441c0b88b4af7d7ace2ca8455346456cb30cd8105a796edc75453a87f91"
	I1211 23:58:08.413995   27586 cri.go:89] found id: "aeaae182100e69fb94d63381a76545d31a0d9d27e8df794dcf898674b0374925"
	I1211 23:58:08.414003   27586 cri.go:89] found id: "b246671cb7ebd1332e264290b1d14d25b693cac91d9b52fd3540285bf08677f6"
	I1211 23:58:08.414010   27586 cri.go:89] found id: "533b2e5a9c2d28cc131486b31c27bcf25934c32a40a567192995ee18952d1f74"
	I1211 23:58:08.414018   27586 cri.go:89] found id: "138f9c8dcb50c0ecbf735235efdec9c471f54bceb1f758e03d1f66245eb03e52"
	I1211 23:58:08.414024   27586 cri.go:89] found id: "a095ba34edcca8613a850ec8d5a90f3095715473e2822c41964ba81e86e9105e"
	I1211 23:58:08.414028   27586 cri.go:89] found id: "9a13d73cda53b861787d14710909e10d5581e282f2c6d2dbb242f5fb05ec6d98"
	I1211 23:58:08.414032   27586 cri.go:89] found id: "746ec0a05954f64ac80eeb1dbf01ec9dd4ac22d092fd795493aca0b5b5c280d7"
	I1211 23:58:08.414040   27586 cri.go:89] found id: "9f62b444d09efdea9ebe57ba2f8a2fd99f2236f002871497da50bc00161b19dc"
	I1211 23:58:08.414044   27586 cri.go:89] found id: "c4b3ad93ba2e011818dea1cb60d344ffede06c8cc4903a77e5b6761edce9fc69"
	I1211 23:58:08.414047   27586 cri.go:89] found id: "47879b4c9f9ddc055b3f0de9c3921fb4a913eab760a903e83e5dd581ea94764e"
	I1211 23:58:08.414052   27586 cri.go:89] found id: "49a709e50508bf30530b49c6277e818b27fea5f381f34cd6af1ea7f1cd9bdb47"
	I1211 23:58:08.414057   27586 cri.go:89] found id: "0ff7242204c8fc4caa1c67bf34b0de78cb4c93c4900341dded6a30d624677ead"
	I1211 23:58:08.414066   27586 cri.go:89] found id: ""
	I1211 23:58:08.414110   27586 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 23:58:08.426504   27586 out.go:203] 
	W1211 23:58:08.427488   27586 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T23:58:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 23:58:08.427507   27586 out.go:285] * 
	* 
	W1211 23:58:08.430366   27586 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 23:58:08.431469   27586 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-758245 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image load --daemon kicbase/echo-server:functional-896350 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-896350 image load --daemon kicbase/echo-server:functional-896350 --alsologtostderr: (2.546075757s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-896350 image ls: (2.254225917s)
functional_test.go:461: expected "kicbase/echo-server:functional-896350" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.80s)
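This failure is unrelated to the runc probe: functional_test.go loads the image with "image load --daemon" and then asserts that "image ls" lists the tag, which it does not (functional_test.go:461). The snippet below is a minimal stand-alone version of that assertion; the binary path, profile, and tag are copied from the log, and the plain substring match is an illustrative simplification of the test's check.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageLoaded lists the images known to the cluster runtime and reports
// whether the given tag shows up, mirroring the post-load assertion.
func imageLoaded(profile, tag string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), tag), nil
}

func main() {
	ok, err := imageLoaded("functional-896350", "kicbase/echo-server:functional-896350")
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	fmt.Println("image present after load:", ok)
}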

                                                
                                    
TestJSONOutput/pause/Command (2.36s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-286508 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-286508 --output=json --user=testUser: exit status 80 (2.360703962s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d165ccc5-7044-4e8b-8c86-b7322096a525","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-286508 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"2ed2d555-1927-4277-a272-00578f74fa93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-12T00:15:38Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"8201b84e-42af-45ef-8aa5-07850c702e8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-286508 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.36s)
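With --output=json each status line in the stdout above is a CloudEvents-style object, and the failure appears as an "io.k8s.sigs.minikube.error" event carrying the same "runc list" error as the addon tests. A small decoder like the one below makes that event easy to pull out of the stream; the struct only mirrors the keys visible in the log and is not an official schema.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors just the fields of the JSON lines seen in the log.
type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Usage (assumed): minikube pause -p json-output-286508 --output=json | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // the boxed advice messages are long lines
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s (exitcode %s):\n%s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}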

                                                
                                    
TestJSONOutput/unpause/Command (1.34s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-286508 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-286508 --output=json --user=testUser: exit status 80 (1.338065053s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"81631441-f1c7-43b2-9f2d-4415fa347cf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-286508 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"f631e5fc-fb0c-4eee-8c46-97dd50153284","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-12T00:15:40Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"4486fc80-ad0c-432a-9601-5c33ea93797b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-286508 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.34s)

                                                
                                    
TestPause/serial/Pause (5.44s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-108809 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-108809 --alsologtostderr -v=5: exit status 80 (2.088173532s)

                                                
                                                
-- stdout --
	* Pausing node pause-108809 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:29:08.010126  219803 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:29:08.010236  219803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:08.010245  219803 out.go:374] Setting ErrFile to fd 2...
	I1212 00:29:08.010249  219803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:08.010453  219803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:29:08.010708  219803 out.go:368] Setting JSON to false
	I1212 00:29:08.010725  219803 mustload.go:66] Loading cluster: pause-108809
	I1212 00:29:08.011071  219803 config.go:182] Loaded profile config "pause-108809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:29:08.011490  219803 cli_runner.go:164] Run: docker container inspect pause-108809 --format={{.State.Status}}
	I1212 00:29:08.029075  219803 host.go:66] Checking if "pause-108809" exists ...
	I1212 00:29:08.029302  219803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:08.085691  219803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-12 00:29:08.075024564 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:29:08.086262  219803 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-108809 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 00:29:08.088290  219803 out.go:179] * Pausing node pause-108809 ... 
	I1212 00:29:08.089292  219803 host.go:66] Checking if "pause-108809" exists ...
	I1212 00:29:08.089574  219803 ssh_runner.go:195] Run: systemctl --version
	I1212 00:29:08.089614  219803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:08.108390  219803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/pause-108809/id_rsa Username:docker}
	I1212 00:29:08.204731  219803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:08.216470  219803 pause.go:52] kubelet running: true
	I1212 00:29:08.216548  219803 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:29:08.342485  219803 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:29:08.342588  219803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:29:08.406941  219803 cri.go:89] found id: "6f11db6d211593329a49d20c3ba7717c980176c1fc4a2687c0baf90e40821715"
	I1212 00:29:08.406963  219803 cri.go:89] found id: "6ae66200862240b5d6ceae4c86c30cafe17708f8a56361f62fa78dab5081c690"
	I1212 00:29:08.406967  219803 cri.go:89] found id: "90d794ebd94c1511854fdbd1662d5c751a16b53886cfdfc51cace8fc7f60fb0d"
	I1212 00:29:08.406970  219803 cri.go:89] found id: "41de3ec4d5aa25f201a8754434478b30d08711863b18624c5bb2fdd05571d3af"
	I1212 00:29:08.406973  219803 cri.go:89] found id: "1290ae60df498ba14c46da67189d27fa731430d7a9bb9f046f812c45953c1041"
	I1212 00:29:08.406976  219803 cri.go:89] found id: "8fb46b8b8615b03b0dfec91141c1f50e62e7cb228b01e02fa770caad208e9ab6"
	I1212 00:29:08.406979  219803 cri.go:89] found id: "2b0ab04eb26df9b71b29754cf1bd4e0318459af4d689ec2ec659fa10d7e04532"
	I1212 00:29:08.406982  219803 cri.go:89] found id: ""
	I1212 00:29:08.407023  219803 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:29:08.418149  219803 retry.go:31] will retry after 253.536112ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:29:08Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:29:08.672646  219803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:08.686042  219803 pause.go:52] kubelet running: false
	I1212 00:29:08.686098  219803 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:29:08.802018  219803 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:29:08.802135  219803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:29:08.872511  219803 cri.go:89] found id: "6f11db6d211593329a49d20c3ba7717c980176c1fc4a2687c0baf90e40821715"
	I1212 00:29:08.872538  219803 cri.go:89] found id: "6ae66200862240b5d6ceae4c86c30cafe17708f8a56361f62fa78dab5081c690"
	I1212 00:29:08.872545  219803 cri.go:89] found id: "90d794ebd94c1511854fdbd1662d5c751a16b53886cfdfc51cace8fc7f60fb0d"
	I1212 00:29:08.872550  219803 cri.go:89] found id: "41de3ec4d5aa25f201a8754434478b30d08711863b18624c5bb2fdd05571d3af"
	I1212 00:29:08.872555  219803 cri.go:89] found id: "1290ae60df498ba14c46da67189d27fa731430d7a9bb9f046f812c45953c1041"
	I1212 00:29:08.872560  219803 cri.go:89] found id: "8fb46b8b8615b03b0dfec91141c1f50e62e7cb228b01e02fa770caad208e9ab6"
	I1212 00:29:08.872572  219803 cri.go:89] found id: "2b0ab04eb26df9b71b29754cf1bd4e0318459af4d689ec2ec659fa10d7e04532"
	I1212 00:29:08.872577  219803 cri.go:89] found id: ""
	I1212 00:29:08.872629  219803 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:29:08.885241  219803 retry.go:31] will retry after 295.258135ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:29:08Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:29:09.180679  219803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:09.192495  219803 pause.go:52] kubelet running: false
	I1212 00:29:09.192545  219803 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:29:09.309643  219803 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:29:09.309752  219803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:29:09.374931  219803 cri.go:89] found id: "6f11db6d211593329a49d20c3ba7717c980176c1fc4a2687c0baf90e40821715"
	I1212 00:29:09.374954  219803 cri.go:89] found id: "6ae66200862240b5d6ceae4c86c30cafe17708f8a56361f62fa78dab5081c690"
	I1212 00:29:09.374960  219803 cri.go:89] found id: "90d794ebd94c1511854fdbd1662d5c751a16b53886cfdfc51cace8fc7f60fb0d"
	I1212 00:29:09.374965  219803 cri.go:89] found id: "41de3ec4d5aa25f201a8754434478b30d08711863b18624c5bb2fdd05571d3af"
	I1212 00:29:09.374970  219803 cri.go:89] found id: "1290ae60df498ba14c46da67189d27fa731430d7a9bb9f046f812c45953c1041"
	I1212 00:29:09.374975  219803 cri.go:89] found id: "8fb46b8b8615b03b0dfec91141c1f50e62e7cb228b01e02fa770caad208e9ab6"
	I1212 00:29:09.374979  219803 cri.go:89] found id: "2b0ab04eb26df9b71b29754cf1bd4e0318459af4d689ec2ec659fa10d7e04532"
	I1212 00:29:09.374983  219803 cri.go:89] found id: ""
	I1212 00:29:09.375030  219803 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:29:09.387033  219803 retry.go:31] will retry after 443.531381ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:29:09Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:29:09.831629  219803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:09.843670  219803 pause.go:52] kubelet running: false
	I1212 00:29:09.843731  219803 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:29:09.955143  219803 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:29:09.955197  219803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:29:10.019573  219803 cri.go:89] found id: "6f11db6d211593329a49d20c3ba7717c980176c1fc4a2687c0baf90e40821715"
	I1212 00:29:10.019592  219803 cri.go:89] found id: "6ae66200862240b5d6ceae4c86c30cafe17708f8a56361f62fa78dab5081c690"
	I1212 00:29:10.019596  219803 cri.go:89] found id: "90d794ebd94c1511854fdbd1662d5c751a16b53886cfdfc51cace8fc7f60fb0d"
	I1212 00:29:10.019600  219803 cri.go:89] found id: "41de3ec4d5aa25f201a8754434478b30d08711863b18624c5bb2fdd05571d3af"
	I1212 00:29:10.019603  219803 cri.go:89] found id: "1290ae60df498ba14c46da67189d27fa731430d7a9bb9f046f812c45953c1041"
	I1212 00:29:10.019606  219803 cri.go:89] found id: "8fb46b8b8615b03b0dfec91141c1f50e62e7cb228b01e02fa770caad208e9ab6"
	I1212 00:29:10.019609  219803 cri.go:89] found id: "2b0ab04eb26df9b71b29754cf1bd4e0318459af4d689ec2ec659fa10d7e04532"
	I1212 00:29:10.019611  219803 cri.go:89] found id: ""
	I1212 00:29:10.019650  219803 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:29:10.032820  219803 out.go:203] 
	W1212 00:29:10.033925  219803 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:29:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:29:10.033938  219803 out.go:285] * 
	* 
	W1212 00:29:10.037606  219803 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:29:10.038691  219803 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-108809 --alsologtostderr -v=5" : exit status 80
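The failure above is mechanical: after finding the kube-system containers via crictl, the pause path runs `sudo runc list -f json`, which exits 1 on every retry (after roughly 254ms, 295ms and 444ms of backoff) because `/run/runc` does not exist on the node, and minikube finally aborts with GUEST_PAUSE and exit status 80. The snippet below is a hypothetical diagnostic sketch of the same probe, not minikube's code; the fallback to `crictl ps` is an assumption for illustration only.

// Hypothetical diagnostic: probe the runc state dir the failing retries complain
// about, and fall back to the CRI view when it is missing. Not minikube's code.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The directory behind "open /run/runc: no such file or directory".
	const stateDir = "/run/runc"

	if _, err := os.Stat(stateDir); os.IsNotExist(err) {
		fmt.Fprintf(os.Stderr, "%s is missing, so `runc list` will fail here\n", stateDir)

		// Assumed fallback for illustration: ask the CRI runtime directly instead.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet").CombinedOutput()
		fmt.Printf("crictl ps: err=%v\n%s", err, out)
		return
	}

	// Same command the log retries three times before giving up.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("runc list: err=%v\n%s", err, out)
}

If `/run/runc` is absent while crictl still reports running containers, the runtime is most likely keeping its container state somewhere other than runc's default root, which is consistent with what the retries above report.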
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-108809
helpers_test.go:244: (dbg) docker inspect pause-108809:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39",
	        "Created": "2025-12-12T00:27:50.544827353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200475,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:27:54.217134342Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39/hosts",
	        "LogPath": "/var/lib/docker/containers/ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39/ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39-json.log",
	        "Name": "/pause-108809",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-108809:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-108809",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39",
	                "LowerDir": "/var/lib/docker/overlay2/8dec569209a2818136b7825930bfa724cadfdaf620b00d20ac0b310a7a92cd76-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8dec569209a2818136b7825930bfa724cadfdaf620b00d20ac0b310a7a92cd76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8dec569209a2818136b7825930bfa724cadfdaf620b00d20ac0b310a7a92cd76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8dec569209a2818136b7825930bfa724cadfdaf620b00d20ac0b310a7a92cd76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-108809",
	                "Source": "/var/lib/docker/volumes/pause-108809/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-108809",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-108809",
	                "name.minikube.sigs.k8s.io": "pause-108809",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b4a1724901e983f1518e5b505e7b809bf4680e7d90731dd8e8730295305f1d69",
	            "SandboxKey": "/var/run/docker/netns/b4a1724901e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-108809": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "448914ed3e5c7e251bd9b03d41daf8f439407ddc3853ad5127fb050155f96e3f",
	                    "EndpointID": "c5a8110563e812393bbd5bf4c5b49890b76ce4a8bb9cf9901b6522e6f126c9a8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "c2:87:20:9e:52:58",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-108809",
	                        "ce748c2a1664"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
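For reference, the SSH endpoint the pause command dialed earlier (127.0.0.1:32983) comes straight out of this inspect document: NetworkSettings.Ports maps the container's 22/tcp to that host port, which is what the log reads with `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"`. Below is a small sketch of extracting the same mapping from the JSON, assuming only that the docker CLI is on PATH; it is an illustration, not the helper minikube itself uses.

// Sketch: read the host port bound to a container's 22/tcp from `docker inspect`.
// The container name is taken from the command line.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// Just the fields this sketch needs from an inspect document.
type inspectDoc struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: sshport <container>")
	}

	raw, err := exec.Command("docker", "inspect", os.Args[1]).Output()
	if err != nil {
		log.Fatalf("docker inspect: %v", err)
	}

	var docs []inspectDoc // docker inspect always emits a JSON array
	if err := json.Unmarshal(raw, &docs); err != nil || len(docs) == 0 {
		log.Fatalf("decode: %v", err)
	}

	bindings := docs[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		log.Fatal("no 22/tcp binding")
	}
	fmt.Printf("ssh endpoint: %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
}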
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-108809 -n pause-108809
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-108809 -n pause-108809: exit status 2 (321.922395ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-108809 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-363822 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --cancel-scheduled                                                                       │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │ 12 Dec 25 00:26 UTC │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │ 12 Dec 25 00:26 UTC │
	│ delete  │ -p scheduled-stop-363822                                                                                          │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:27 UTC │
	│ start   │ -p insufficient-storage-333251 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-333251 │ jenkins │ v1.37.0 │ 12 Dec 25 00:27 UTC │                     │
	│ delete  │ -p insufficient-storage-333251                                                                                    │ insufficient-storage-333251 │ jenkins │ v1.37.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:27 UTC │
	│ start   │ -p pause-108809 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-108809                │ jenkins │ v1.37.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:29 UTC │
	│ start   │ -p offline-crio-101842 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-101842         │ jenkins │ v1.37.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p running-upgrade-299658 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-299658      │ jenkins │ v1.35.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p stopped-upgrade-148693 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-148693      │ jenkins │ v1.35.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p running-upgrade-299658 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ running-upgrade-299658      │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │                     │
	│ stop    │ stopped-upgrade-148693 stop                                                                                       │ stopped-upgrade-148693      │ jenkins │ v1.35.0 │ 12 Dec 25 00:28 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p stopped-upgrade-148693 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ stopped-upgrade-148693      │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │                     │
	│ delete  │ -p offline-crio-101842                                                                                            │ offline-crio-101842         │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p force-systemd-env-551801 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio        │ force-systemd-env-551801    │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │ 12 Dec 25 00:28 UTC │
	│ delete  │ -p force-systemd-env-551801                                                                                       │ force-systemd-env-551801    │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p NoKubernetes-131237 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio     │ NoKubernetes-131237         │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │                     │
	│ start   │ -p NoKubernetes-131237 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio             │ NoKubernetes-131237         │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │                     │
	│ start   │ -p pause-108809 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-108809                │ jenkins │ v1.37.0 │ 12 Dec 25 00:29 UTC │ 12 Dec 25 00:29 UTC │
	│ pause   │ -p pause-108809 --alsologtostderr -v=5                                                                            │ pause-108809                │ jenkins │ v1.37.0 │ 12 Dec 25 00:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:29:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
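The line above spells out the klog-style format used by every entry that follows: a severity letter (I, W, E or F), month and day, a microsecond timestamp, the writer id, the source file and line, then the message. A throwaway parser for that shape, with a regular expression that is mine and purely illustrative:

// Sketch: split a klog-style line ("[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg")
// into its parts. Not minikube code.
package main

import (
	"fmt"
	"regexp"
)

var klogLine = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ :]+):(\d+)\] (.*)$`)

func main() {
	line := `I1212 00:29:00.510197  217893 out.go:360] Setting OutFile to fd 1 ...`
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s month=%s day=%s time=%s id=%s file=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
}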
	I1212 00:29:00.510197  217893 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:29:00.510492  217893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:00.510504  217893 out.go:374] Setting ErrFile to fd 2...
	I1212 00:29:00.510512  217893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:00.510691  217893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:29:00.511116  217893 out.go:368] Setting JSON to false
	I1212 00:29:00.512292  217893 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4286,"bootTime":1765495054,"procs":393,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:29:00.512348  217893 start.go:143] virtualization: kvm guest
	I1212 00:29:00.515309  217893 out.go:179] * [pause-108809] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:29:00.516821  217893 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:29:00.516850  217893 notify.go:221] Checking for updates...
	I1212 00:29:00.519409  217893 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:29:00.520673  217893 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:29:00.521945  217893 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:29:00.523243  217893 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:29:00.524417  217893 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:29:00.525959  217893 config.go:182] Loaded profile config "pause-108809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:29:00.526514  217893 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:29:00.551033  217893 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:29:00.551200  217893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:00.611792  217893 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:91 SystemTime:2025-12-12 00:29:00.601831402 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:29:00.611896  217893 docker.go:319] overlay module found
	I1212 00:29:00.614977  217893 out.go:179] * Using the docker driver based on existing profile
	I1212 00:29:00.616041  217893 start.go:309] selected driver: docker
	I1212 00:29:00.616054  217893 start.go:927] validating driver "docker" against &{Name:pause-108809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-108809 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:00.616177  217893 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:29:00.616280  217893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:00.673044  217893 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:91 SystemTime:2025-12-12 00:29:00.663176264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:29:00.673711  217893 cni.go:84] Creating CNI manager for ""
	I1212 00:29:00.673779  217893 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:00.673822  217893 start.go:353] cluster config:
	{Name:pause-108809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-108809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:00.675519  217893 out.go:179] * Starting "pause-108809" primary control-plane node in "pause-108809" cluster
	I1212 00:29:00.676654  217893 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:29:00.677828  217893 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:29:00.678933  217893 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:29:00.678963  217893 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:29:00.678975  217893 cache.go:65] Caching tarball of preloaded images
	I1212 00:29:00.679034  217893 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:29:00.679073  217893 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:29:00.679089  217893 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:29:00.679229  217893 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/config.json ...
	I1212 00:29:00.699131  217893 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:29:00.699151  217893 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:29:00.699170  217893 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:29:00.699204  217893 start.go:360] acquireMachinesLock for pause-108809: {Name:mkbc1656233633c9d25246c0440fc129d4adfcfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:29:00.699275  217893 start.go:364] duration metric: took 49.479µs to acquireMachinesLock for "pause-108809"
	I1212 00:29:00.699297  217893 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:29:00.699307  217893 fix.go:54] fixHost starting: 
	I1212 00:29:00.699590  217893 cli_runner.go:164] Run: docker container inspect pause-108809 --format={{.State.Status}}
	I1212 00:29:00.718386  217893 fix.go:112] recreateIfNeeded on pause-108809: state=Running err=<nil>
	W1212 00:29:00.718446  217893 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:28:57.132736  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:57.133114  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:28:57.631745  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:57.632103  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:28:58.132598  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:58.132920  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:28:58.632598  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:58.632999  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:28:59.132730  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:59.133115  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:28:59.632634  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:59.633029  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:00.132785  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:00.133254  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:00.632663  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:00.633093  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:01.131702  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:01.132075  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:01.632622  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:01.633002  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
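The block above belongs to a different start running in parallel (process 209256), which is polling its apiserver's /healthz roughly every half second and getting connection refused each time. The shape of that loop, as a standalone sketch in which the interval, timeout and TLS-skip are assumptions rather than minikube's settings:

// Sketch: poll a /healthz URL until it answers 200 OK or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver cert is signed by the cluster CA, which this sketch does not load.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		fmt.Printf("still waiting: %v\n", err) // same shape as the "stopped: ... connection refused" lines
		time.Sleep(interval)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.103.2:8443/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}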
	I1212 00:28:59.148189  217460 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 00:28:59.148369  217460 start.go:159] libmachine.API.Create for "NoKubernetes-131237" (driver="docker")
	I1212 00:28:59.148393  217460 client.go:173] LocalClient.Create starting
	I1212 00:28:59.148463  217460 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem
	I1212 00:28:59.148507  217460 main.go:143] libmachine: Decoding PEM data...
	I1212 00:28:59.148524  217460 main.go:143] libmachine: Parsing certificate...
	I1212 00:28:59.148577  217460 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem
	I1212 00:28:59.148621  217460 main.go:143] libmachine: Decoding PEM data...
	I1212 00:28:59.148630  217460 main.go:143] libmachine: Parsing certificate...
	I1212 00:28:59.148957  217460 cli_runner.go:164] Run: docker network inspect NoKubernetes-131237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:28:59.164592  217460 cli_runner.go:211] docker network inspect NoKubernetes-131237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:28:59.164661  217460 network_create.go:284] running [docker network inspect NoKubernetes-131237] to gather additional debugging logs...
	I1212 00:28:59.164687  217460 cli_runner.go:164] Run: docker network inspect NoKubernetes-131237
	W1212 00:28:59.180841  217460 cli_runner.go:211] docker network inspect NoKubernetes-131237 returned with exit code 1
	I1212 00:28:59.180867  217460 network_create.go:287] error running [docker network inspect NoKubernetes-131237]: docker network inspect NoKubernetes-131237: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-131237 not found
	I1212 00:28:59.180881  217460 network_create.go:289] output of [docker network inspect NoKubernetes-131237]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-131237 not found
	
	** /stderr **
	I1212 00:28:59.180964  217460 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:28:59.197900  217460 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ad72354fdcf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:1e:a7:00:22:62} reservation:<nil>}
	I1212 00:28:59.198405  217460 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-18bf2e8051c8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:8e:8f:a6:9d:0c} reservation:<nil>}
	I1212 00:28:59.198970  217460 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5251ddf0a35 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:5a:cf:fd:1d:2a} reservation:<nil>}
	I1212 00:28:59.199656  217460 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-448914ed3e5c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:9e:2c:f3:9e:e5:cd} reservation:<nil>}
	I1212 00:28:59.200437  217460 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb0d00}
	I1212 00:28:59.200457  217460 network_create.go:124] attempt to create docker network NoKubernetes-131237 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 00:28:59.200511  217460 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-131237 NoKubernetes-131237
	I1212 00:28:59.244845  217460 network_create.go:108] docker network NoKubernetes-131237 192.168.85.0/24 created
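The subnet scan above (skip 192.168.49/58/67/76, settle on 192.168.85.0/24) can be reproduced with a small stand-alone sketch. The 9-wide step between candidates and the helper below are inferred from this log, not taken from minikube's network.go.

    // subnet_pick.go - illustrative sketch only: walk the same candidate
    // sequence the log shows (192.168.49.0/24, .58, .67, .76, .85, ...) and
    // return the first CIDR not already used by an existing docker network.
    package main

    import "fmt"

    func firstFreeSubnet(taken map[string]bool) string {
    	for third := 49; third < 256; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return "" // no free /24 in this range
    }

    func main() {
    	taken := map[string]bool{ // the four subnets this run skipped
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    		"192.168.76.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24
    }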
	I1212 00:28:59.244870  217460 kic.go:121] calculated static IP "192.168.85.2" for the "NoKubernetes-131237" container
	I1212 00:28:59.244932  217460 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:28:59.262019  217460 cli_runner.go:164] Run: docker volume create NoKubernetes-131237 --label name.minikube.sigs.k8s.io=NoKubernetes-131237 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:28:59.278868  217460 oci.go:103] Successfully created a docker volume NoKubernetes-131237
	I1212 00:28:59.278934  217460 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-131237-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-131237 --entrypoint /usr/bin/test -v NoKubernetes-131237:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:28:59.650939  217460 oci.go:107] Successfully prepared a docker volume NoKubernetes-131237
	I1212 00:28:59.651015  217460 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:28:59.651029  217460 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:28:59.651096  217460 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v NoKubernetes-131237:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:29:03.481928  217460 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v NoKubernetes-131237:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.830783516s)
	I1212 00:29:03.481962  217460 kic.go:203] duration metric: took 3.83092807s to extract preloaded images to volume ...
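The tar-in-a-sidecar step above can be expressed as a short wrapper around docker. The function name and the example arguments below are placeholders for illustration, not minikube's kic.go code.

    // preload_extract.go - minimal sketch: unpack an lz4-compressed preload
    // tarball into a named docker volume via a throwaway container, mirroring
    // the "docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf" call above.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func extractPreload(tarball, volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// placeholder arguments; the real run pins the kicbase image by digest
    	if err := extractPreload("/tmp/preloaded-images.tar.lz4", "demo-volume",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"); err != nil {
    		log.Fatal(err)
    	}
    }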
	W1212 00:29:03.482066  217460 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 00:29:03.482105  217460 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 00:29:03.482156  217460 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:29:03.541907  217460 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-131237 --name NoKubernetes-131237 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-131237 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-131237 --network NoKubernetes-131237 --ip 192.168.85.2 --volume NoKubernetes-131237:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 00:29:03.810922  217460 cli_runner.go:164] Run: docker container inspect NoKubernetes-131237 --format={{.State.Running}}
	I1212 00:29:03.829633  217460 cli_runner.go:164] Run: docker container inspect NoKubernetes-131237 --format={{.State.Status}}
	I1212 00:29:03.848531  217460 cli_runner.go:164] Run: docker exec NoKubernetes-131237 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:29:03.894533  217460 oci.go:144] the created container "NoKubernetes-131237" has a running status.
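Because the node container publishes port 22 to an ephemeral host port (--publish=127.0.0.1::22 above), the SSH endpoint has to be read back; the sketch below uses the same inspect template the cli_runner calls in this log run, with the container name taken from this profile.

    // host_port.go - sketch: read back the host port docker assigned to the
    // container's 22/tcp.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		"NoKubernetes-131237").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("ssh reachable at 127.0.0.1:" + strings.TrimSpace(string(out)))
    }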
	I1212 00:29:03.894559  217460 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa...
	I1212 00:29:03.936539  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 00:29:03.936589  217460 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
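The two kic lines above only summarize the SSH bootstrap; a rough equivalent is sketched below. The key path and the use of "docker exec -u docker" are assumptions made for illustration, not the exact mechanism minikube uses.

    // ssh_key_seed.go - sketch: create a keypair and append the public half to
    // /home/docker/.ssh/authorized_keys inside the node container.
    package main

    import (
    	"bytes"
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	key := "/tmp/demo_id_rsa" // placeholder path
    	os.Remove(key)            // allow re-runs
    	os.Remove(key + ".pub")
    	if err := exec.Command("ssh-keygen", "-t", "rsa", "-b", "2048",
    		"-f", key, "-N", "").Run(); err != nil {
    		log.Fatal(err)
    	}
    	pub, err := os.ReadFile(key + ".pub")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cmd := exec.Command("docker", "exec", "-i", "-u", "docker", "NoKubernetes-131237",
    		"sh", "-c", "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys")
    	cmd.Stdin = bytes.NewReader(pub)
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }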
	I1212 00:28:59.952157  210380 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:28:59.952198  210380 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:29:00.720781  217893 out.go:252] * Updating the running docker "pause-108809" container ...
	I1212 00:29:00.720824  217893 machine.go:94] provisionDockerMachine start ...
	I1212 00:29:00.720888  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:00.737642  217893 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:00.737881  217893 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1212 00:29:00.737898  217893 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:29:00.869243  217893 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-108809
	
	I1212 00:29:00.869271  217893 ubuntu.go:182] provisioning hostname "pause-108809"
	I1212 00:29:00.869335  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:00.889063  217893 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:00.889385  217893 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1212 00:29:00.889406  217893 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-108809 && echo "pause-108809" | sudo tee /etc/hostname
	I1212 00:29:01.037851  217893 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-108809
	
	I1212 00:29:01.037942  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:01.055541  217893 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:01.055784  217893 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1212 00:29:01.055808  217893 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-108809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-108809/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-108809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:29:01.192012  217893 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:29:01.192042  217893 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:29:01.192078  217893 ubuntu.go:190] setting up certificates
	I1212 00:29:01.192091  217893 provision.go:84] configureAuth start
	I1212 00:29:01.192160  217893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-108809
	I1212 00:29:01.214046  217893 provision.go:143] copyHostCerts
	I1212 00:29:01.214123  217893 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:29:01.214150  217893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:29:01.214234  217893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:29:01.214377  217893 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:29:01.214392  217893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:29:01.214433  217893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:29:01.214536  217893 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:29:01.214548  217893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:29:01.214585  217893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:29:01.214653  217893 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.pause-108809 san=[127.0.0.1 192.168.76.2 localhost minikube pause-108809]
	I1212 00:29:01.382615  217893 provision.go:177] copyRemoteCerts
	I1212 00:29:01.382687  217893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:29:01.382738  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:01.401963  217893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/pause-108809/id_rsa Username:docker}
	I1212 00:29:01.498881  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:29:01.534801  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1212 00:29:01.560499  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:29:01.579971  217893 provision.go:87] duration metric: took 387.859751ms to configureAuth
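configureAuth regenerates the server certificate with the SAN list shown at provision.go:117 above; a small stdlib check of what actually ended up in a generated PEM is sketched below (illustrative, the path argument is whichever certificate you want to inspect).

    // cert_sans.go - illustrative check, not minikube code: print the DNS and
    // IP SANs of a PEM certificate such as .minikube/machines/server.pem.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	raw, err := os.ReadFile(os.Args[1])
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, minikube, pause-108809
    	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.76.2
    }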
	I1212 00:29:01.580004  217893 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:29:01.580224  217893 config.go:182] Loaded profile config "pause-108809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:29:01.580326  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:01.598302  217893 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:01.598576  217893 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1212 00:29:01.598593  217893 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:29:03.651414  217893 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:29:03.651440  217893 machine.go:97] duration metric: took 2.930607352s to provisionDockerMachine
	I1212 00:29:03.651484  217893 start.go:293] postStartSetup for "pause-108809" (driver="docker")
	I1212 00:29:03.651499  217893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:29:03.651566  217893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:29:03.651618  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:03.675163  217893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/pause-108809/id_rsa Username:docker}
	I1212 00:29:03.775427  217893 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:29:03.779259  217893 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:29:03.779289  217893 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:29:03.779302  217893 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:29:03.779352  217893 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:29:03.779449  217893 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:29:03.779585  217893 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:29:03.787499  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:29:03.805383  217893 start.go:296] duration metric: took 153.883196ms for postStartSetup
	I1212 00:29:03.805464  217893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:29:03.805533  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:03.824625  217893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/pause-108809/id_rsa Username:docker}
	I1212 00:29:03.921857  217893 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:29:03.926494  217893 fix.go:56] duration metric: took 3.227159032s for fixHost
	I1212 00:29:03.926520  217893 start.go:83] releasing machines lock for "pause-108809", held for 3.22723166s
	I1212 00:29:03.926583  217893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-108809
	I1212 00:29:03.949897  217893 ssh_runner.go:195] Run: cat /version.json
	I1212 00:29:03.949951  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:03.949953  217893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:29:03.950040  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:03.970832  217893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/pause-108809/id_rsa Username:docker}
	I1212 00:29:03.972618  217893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/pause-108809/id_rsa Username:docker}
	I1212 00:29:04.123574  217893 ssh_runner.go:195] Run: systemctl --version
	I1212 00:29:04.129747  217893 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:29:04.163281  217893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:29:04.167790  217893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:29:04.167849  217893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:29:04.175319  217893 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:29:04.175334  217893 start.go:496] detecting cgroup driver to use...
	I1212 00:29:04.175359  217893 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:29:04.175406  217893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:29:04.188821  217893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:29:04.200606  217893 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:29:04.200650  217893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:29:04.213950  217893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:29:04.225430  217893 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:29:04.327195  217893 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:29:04.431152  217893 docker.go:234] disabling docker service ...
	I1212 00:29:04.431241  217893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:29:04.445380  217893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:29:04.457394  217893 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:29:04.559229  217893 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:29:04.664005  217893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:29:04.675793  217893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:29:04.689160  217893 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:29:04.689213  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.698028  217893 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:29:04.698079  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.706358  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.714185  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.723434  217893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:29:04.730809  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.738843  217893 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.746461  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.754381  217893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:29:04.760927  217893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:29:04.767500  217893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:04.866190  217893 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:29:05.044918  217893 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:29:05.044988  217893 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:29:05.049164  217893 start.go:564] Will wait 60s for crictl version
	I1212 00:29:05.049223  217893 ssh_runner.go:195] Run: which crictl
	I1212 00:29:05.053120  217893 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:29:05.087221  217893 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:29:05.087304  217893 ssh_runner.go:195] Run: crio --version
	I1212 00:29:05.122326  217893 ssh_runner.go:195] Run: crio --version
	I1212 00:29:05.153846  217893 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:29:05.154882  217893 cli_runner.go:164] Run: docker network inspect pause-108809 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:29:05.173354  217893 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 00:29:05.177702  217893 kubeadm.go:884] updating cluster {Name:pause-108809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-108809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:29:05.177836  217893 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:29:05.177878  217893 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:05.210846  217893 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:05.210870  217893 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:29:05.210924  217893 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:05.238639  217893 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:05.238656  217893 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:29:05.238663  217893 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1212 00:29:05.238763  217893 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-108809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-108809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:29:05.238824  217893 ssh_runner.go:195] Run: crio config
	I1212 00:29:05.283292  217893 cni.go:84] Creating CNI manager for ""
	I1212 00:29:05.283315  217893 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:05.283332  217893 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:29:05.283360  217893 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-108809 NodeName:pause-108809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:29:05.283596  217893 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-108809"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:29:05.283656  217893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:29:05.291507  217893 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:29:05.291571  217893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:29:05.299760  217893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1212 00:29:05.312552  217893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:29:05.325335  217893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
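The 2208-byte kubeadm.yaml.new written above contains four YAML documents; a quick stdlib check that lists their kinds is sketched below (purely illustrative, only the file path is taken from the log).

    // kubeadm_yaml_kinds.go - sanity sketch: print the "kind:" of every document
    // in the generated kubeadm config (InitConfiguration, ClusterConfiguration,
    // KubeletConfiguration, KubeProxyConfiguration for this run).
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
    				fmt.Println(strings.TrimSpace(line))
    			}
    		}
    	}
    }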
	I1212 00:29:05.337304  217893 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:29:05.340834  217893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:05.460687  217893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:05.473322  217893 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809 for IP: 192.168.76.2
	I1212 00:29:05.473345  217893 certs.go:195] generating shared ca certs ...
	I1212 00:29:05.473364  217893 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:05.473552  217893 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:29:05.473617  217893 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:29:05.473633  217893 certs.go:257] generating profile certs ...
	I1212 00:29:05.473751  217893 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/client.key
	I1212 00:29:05.473827  217893 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/apiserver.key.2cc90d5b
	I1212 00:29:05.473885  217893 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/proxy-client.key
	I1212 00:29:05.474021  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:29:05.474064  217893 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:29:05.474079  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:29:05.474126  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:29:05.474163  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:29:05.474200  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:29:05.474258  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:29:05.474978  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:29:05.492396  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:29:05.509412  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:29:05.526071  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:29:05.542188  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 00:29:05.558841  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:29:05.575073  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:29:05.591093  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:29:05.607409  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:29:05.623679  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:29:05.640656  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:29:05.656694  217893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:29:05.667954  217893 ssh_runner.go:195] Run: openssl version
	I1212 00:29:05.673764  217893 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:29:05.680689  217893 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:29:05.687489  217893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:29:05.690792  217893 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:29:05.690835  217893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:29:05.724598  217893 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:29:05.731516  217893 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:29:05.738178  217893 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:29:05.744899  217893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:29:05.748224  217893 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:29:05.748274  217893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:29:05.781242  217893 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:29:05.788292  217893 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:05.794963  217893 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:29:05.801858  217893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:05.805649  217893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:05.805697  217893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:05.839708  217893 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
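The "test -L" probes above check for symlinks whose names are the OpenSSL subject hashes of the PEMs just copied; that mapping can be reproduced as below. The path is one of the examples from the log and the sketch is not minikube's certs.go.

    // ca_hash_link.go - sketch: compute the OpenSSL subject hash of a CA PEM and
    // link it into /etc/ssl/certs/<hash>.0, which is what the ln -fs / test -L
    // pair above verifies. Needs root to write into /etc/ssl/certs.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // example from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // b5213941 for this CA, per the log
    	link := "/etc/ssl/certs/" + hash + ".0"
    	os.Remove(link) // replace any stale link
    	if err := os.Symlink(pemPath, link); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked", link, "->", pemPath)
    }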
	I1212 00:29:05.846806  217893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:05.850268  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:29:05.883429  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:29:05.916721  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:29:05.949659  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:29:05.983765  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:29:06.017264  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
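Each -checkend 86400 call above asks openssl whether the certificate will still be valid 24 hours from now; the same check in plain Go is sketched below (illustrative, not the minikube implementation).

    // cert_expiry.go - sketch: fail if the given certificate expires within the
    // next 24 hours, mirroring "openssl x509 -noout -checkend 86400".
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	raw, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/etcd/server.crt
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid until", cert.NotAfter)
    }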
	I1212 00:29:06.050770  217893 kubeadm.go:401] StartCluster: {Name:pause-108809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-108809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:06.050883  217893 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:29:06.050934  217893 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:29:06.077159  217893 cri.go:89] found id: "6f11db6d211593329a49d20c3ba7717c980176c1fc4a2687c0baf90e40821715"
	I1212 00:29:06.077190  217893 cri.go:89] found id: "6ae66200862240b5d6ceae4c86c30cafe17708f8a56361f62fa78dab5081c690"
	I1212 00:29:06.077198  217893 cri.go:89] found id: "90d794ebd94c1511854fdbd1662d5c751a16b53886cfdfc51cace8fc7f60fb0d"
	I1212 00:29:06.077209  217893 cri.go:89] found id: "41de3ec4d5aa25f201a8754434478b30d08711863b18624c5bb2fdd05571d3af"
	I1212 00:29:06.077215  217893 cri.go:89] found id: "1290ae60df498ba14c46da67189d27fa731430d7a9bb9f046f812c45953c1041"
	I1212 00:29:06.077220  217893 cri.go:89] found id: "8fb46b8b8615b03b0dfec91141c1f50e62e7cb228b01e02fa770caad208e9ab6"
	I1212 00:29:06.077225  217893 cri.go:89] found id: "2b0ab04eb26df9b71b29754cf1bd4e0318459af4d689ec2ec659fa10d7e04532"
	I1212 00:29:06.077233  217893 cri.go:89] found id: ""
	I1212 00:29:06.077269  217893 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 00:29:06.088228  217893 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:29:06Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:29:06.088297  217893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:29:06.096028  217893 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:29:06.096044  217893 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:29:06.096080  217893 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:29:06.103445  217893 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:29:06.104071  217893 kubeconfig.go:125] found "pause-108809" server: "https://192.168.76.2:8443"
	I1212 00:29:06.104938  217893 kapi.go:59] client config for pause-108809: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/client.key", CAFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:06.105305  217893 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 00:29:06.105318  217893 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:29:06.105323  217893 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:29:06.105332  217893 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:29:06.105340  217893 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 00:29:06.105666  217893 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:29:06.113179  217893 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 00:29:06.113202  217893 kubeadm.go:602] duration metric: took 17.15275ms to restartPrimaryControlPlane
	I1212 00:29:06.113211  217893 kubeadm.go:403] duration metric: took 62.446229ms to StartCluster
	I1212 00:29:06.113230  217893 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:06.113287  217893 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:29:06.114607  217893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:06.114866  217893 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:29:06.114931  217893 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:29:06.115129  217893 config.go:182] Loaded profile config "pause-108809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:29:06.117342  217893 out.go:179] * Verifying Kubernetes components...
	I1212 00:29:06.117342  217893 out.go:179] * Enabled addons: 
	I1212 00:29:02.132743  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:02.133157  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:02.632640  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:02.633029  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:03.132717  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:03.133176  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:03.632631  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:03.633068  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:04.132763  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:04.133147  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:04.631777  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:04.632157  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:05.132694  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:05.133087  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:05.632619  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:05.632929  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:06.132601  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:06.132940  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:06.632609  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:06.632958  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:06.118465  217893 addons.go:530] duration metric: took 3.537801ms for enable addons: enabled=[]
	I1212 00:29:06.118515  217893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:06.221097  217893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:06.233111  217893 node_ready.go:35] waiting up to 6m0s for node "pause-108809" to be "Ready" ...
	I1212 00:29:06.240103  217893 node_ready.go:49] node "pause-108809" is "Ready"
	I1212 00:29:06.240128  217893 node_ready.go:38] duration metric: took 6.987375ms for node "pause-108809" to be "Ready" ...
	I1212 00:29:06.240143  217893 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:29:06.240193  217893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:29:06.251017  217893 api_server.go:72] duration metric: took 136.117435ms to wait for apiserver process to appear ...
	I1212 00:29:06.251040  217893 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:29:06.251058  217893 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:29:06.254692  217893 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 00:29:06.255486  217893 api_server.go:141] control plane version: v1.34.2
	I1212 00:29:06.255508  217893 api_server.go:131] duration metric: took 4.462067ms to wait for apiserver health ...
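Both clusters in this log are probed the same way: GET /healthz on the apiserver until it answers 200 or the wait times out (the 209256 run above keeps getting connection refused, while pause-108809 succeeds on the first try). A simplified poller is sketched below; it skips TLS verification for brevity, whereas minikube itself trusts the cluster CA.

    // healthz_poll.go - simplified apiserver health wait (illustrative only).
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s did not return 200 within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }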
	I1212 00:29:06.255517  217893 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:29:06.258652  217893 system_pods.go:59] 7 kube-system pods found
	I1212 00:29:06.258690  217893 system_pods.go:61] "coredns-66bc5c9577-b5lpn" [c278674e-b431-4e23-9b7f-e64bf1141aa8] Running
	I1212 00:29:06.258699  217893 system_pods.go:61] "etcd-pause-108809" [c085d719-7321-4e80-b0be-6a716ee272ca] Running
	I1212 00:29:06.258705  217893 system_pods.go:61] "kindnet-bvmdc" [fdac71f9-d66e-4e40-a014-65e866e5bd85] Running
	I1212 00:29:06.258716  217893 system_pods.go:61] "kube-apiserver-pause-108809" [b2462188-0872-4737-8642-8057f3e1e7c1] Running
	I1212 00:29:06.258726  217893 system_pods.go:61] "kube-controller-manager-pause-108809" [0eb5f9f9-3288-40ac-8adc-eeba09986127] Running
	I1212 00:29:06.258735  217893 system_pods.go:61] "kube-proxy-8psfp" [dd013fac-10d7-44eb-849b-2ba423554488] Running
	I1212 00:29:06.258740  217893 system_pods.go:61] "kube-scheduler-pause-108809" [6f071b53-9b77-4fda-8174-73eeecfa738d] Running
	I1212 00:29:06.258749  217893 system_pods.go:74] duration metric: took 3.222438ms to wait for pod list to return data ...
	I1212 00:29:06.258757  217893 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:29:06.260539  217893 default_sa.go:45] found service account: "default"
	I1212 00:29:06.260555  217893 default_sa.go:55] duration metric: took 1.790262ms for default service account to be created ...
	I1212 00:29:06.260562  217893 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:29:06.262717  217893 system_pods.go:86] 7 kube-system pods found
	I1212 00:29:06.262737  217893 system_pods.go:89] "coredns-66bc5c9577-b5lpn" [c278674e-b431-4e23-9b7f-e64bf1141aa8] Running
	I1212 00:29:06.262743  217893 system_pods.go:89] "etcd-pause-108809" [c085d719-7321-4e80-b0be-6a716ee272ca] Running
	I1212 00:29:06.262746  217893 system_pods.go:89] "kindnet-bvmdc" [fdac71f9-d66e-4e40-a014-65e866e5bd85] Running
	I1212 00:29:06.262750  217893 system_pods.go:89] "kube-apiserver-pause-108809" [b2462188-0872-4737-8642-8057f3e1e7c1] Running
	I1212 00:29:06.262753  217893 system_pods.go:89] "kube-controller-manager-pause-108809" [0eb5f9f9-3288-40ac-8adc-eeba09986127] Running
	I1212 00:29:06.262756  217893 system_pods.go:89] "kube-proxy-8psfp" [dd013fac-10d7-44eb-849b-2ba423554488] Running
	I1212 00:29:06.262759  217893 system_pods.go:89] "kube-scheduler-pause-108809" [6f071b53-9b77-4fda-8174-73eeecfa738d] Running
	I1212 00:29:06.262764  217893 system_pods.go:126] duration metric: took 2.198215ms to wait for k8s-apps to be running ...
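minikube's own wait loops go through client-go (the kapi.go client config dumps above); a rough command-line equivalent of the "k8s-apps to be running" check, shelling out to kubectl instead, is sketched below. The context name matches this profile; everything else is illustrative.

    // pods_ready.go - rough equivalent of the kube-system readiness wait, using
    // kubectl rather than client-go (illustrative only).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		// exits 0 once every kube-system pod reports the Ready condition
    		err := exec.Command("kubectl", "--context", "pause-108809",
    			"-n", "kube-system", "wait", "pods", "--all",
    			"--for=condition=Ready", "--timeout=30s").Run()
    		if err == nil {
    			fmt.Println("all kube-system pods Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-system pods")
    }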
	I1212 00:29:06.262770  217893 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:29:06.262805  217893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:06.274417  217893 system_svc.go:56] duration metric: took 11.640503ms WaitForService to wait for kubelet
	I1212 00:29:06.274438  217893 kubeadm.go:587] duration metric: took 159.541412ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:29:06.274455  217893 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:29:06.276144  217893 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:29:06.276162  217893 node_conditions.go:123] node cpu capacity is 8
	I1212 00:29:06.276174  217893 node_conditions.go:105] duration metric: took 1.715367ms to run NodePressure ...
	I1212 00:29:06.276184  217893 start.go:242] waiting for startup goroutines ...
	I1212 00:29:06.276190  217893 start.go:247] waiting for cluster config update ...
	I1212 00:29:06.276206  217893 start.go:256] writing updated cluster config ...
	I1212 00:29:06.276518  217893 ssh_runner.go:195] Run: rm -f paused
	I1212 00:29:06.279868  217893 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:29:06.280528  217893 kapi.go:59] client config for pause-108809: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/client.key", CAFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:06.282539  217893 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b5lpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.285632  217893 pod_ready.go:94] pod "coredns-66bc5c9577-b5lpn" is "Ready"
	I1212 00:29:06.285647  217893 pod_ready.go:86] duration metric: took 3.086348ms for pod "coredns-66bc5c9577-b5lpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.287173  217893 pod_ready.go:83] waiting for pod "etcd-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.290142  217893 pod_ready.go:94] pod "etcd-pause-108809" is "Ready"
	I1212 00:29:06.290157  217893 pod_ready.go:86] duration metric: took 2.969103ms for pod "etcd-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.291718  217893 pod_ready.go:83] waiting for pod "kube-apiserver-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.294698  217893 pod_ready.go:94] pod "kube-apiserver-pause-108809" is "Ready"
	I1212 00:29:06.294714  217893 pod_ready.go:86] duration metric: took 2.98025ms for pod "kube-apiserver-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.296155  217893 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.683423  217893 pod_ready.go:94] pod "kube-controller-manager-pause-108809" is "Ready"
	I1212 00:29:06.683445  217893 pod_ready.go:86] duration metric: took 387.274617ms for pod "kube-controller-manager-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.884285  217893 pod_ready.go:83] waiting for pod "kube-proxy-8psfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:07.284117  217893 pod_ready.go:94] pod "kube-proxy-8psfp" is "Ready"
	I1212 00:29:07.284149  217893 pod_ready.go:86] duration metric: took 399.835854ms for pod "kube-proxy-8psfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:07.483682  217893 pod_ready.go:83] waiting for pod "kube-scheduler-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:07.884442  217893 pod_ready.go:94] pod "kube-scheduler-pause-108809" is "Ready"
	I1212 00:29:07.884468  217893 pod_ready.go:86] duration metric: took 400.76454ms for pod "kube-scheduler-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:07.884506  217893 pod_ready.go:40] duration metric: took 1.604612429s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:29:07.926040  217893 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:29:07.929006  217893 out.go:179] * Done! kubectl is now configured to use "pause-108809" cluster and "default" namespace by default
	I1212 00:29:03.970255  217460 cli_runner.go:164] Run: docker container inspect NoKubernetes-131237 --format={{.State.Status}}
	I1212 00:29:03.989538  217460 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:29:03.989560  217460 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-131237 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:29:04.032837  217460 cli_runner.go:164] Run: docker container inspect NoKubernetes-131237 --format={{.State.Status}}
	I1212 00:29:04.054742  217460 machine.go:94] provisionDockerMachine start ...
	I1212 00:29:04.054848  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:04.075138  217460 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:04.075432  217460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1212 00:29:04.075448  217460 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:29:04.076132  217460 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58226->127.0.0.1:33008: read: connection reset by peer
	I1212 00:29:07.208030  217460 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-131237
	
	I1212 00:29:07.208057  217460 ubuntu.go:182] provisioning hostname "NoKubernetes-131237"
	I1212 00:29:07.208127  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:07.225933  217460 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:07.226138  217460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1212 00:29:07.226150  217460 main.go:143] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-131237 && echo "NoKubernetes-131237" | sudo tee /etc/hostname
	I1212 00:29:07.365351  217460 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-131237
	
	I1212 00:29:07.365436  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:07.384422  217460 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:07.384657  217460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1212 00:29:07.384676  217460 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-131237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-131237/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-131237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:29:07.514616  217460 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:29:07.514643  217460 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:29:07.514688  217460 ubuntu.go:190] setting up certificates
	I1212 00:29:07.514699  217460 provision.go:84] configureAuth start
	I1212 00:29:07.514748  217460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-131237
	I1212 00:29:07.532462  217460 provision.go:143] copyHostCerts
	I1212 00:29:07.532506  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:29:07.532536  217460 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:29:07.532542  217460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:29:07.532606  217460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:29:07.532688  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:29:07.532715  217460 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:29:07.532725  217460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:29:07.532763  217460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:29:07.532852  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:29:07.532872  217460 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:29:07.532882  217460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:29:07.532922  217460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:29:07.533012  217460 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-131237 san=[127.0.0.1 192.168.85.2 NoKubernetes-131237 localhost minikube]
	I1212 00:29:07.705906  217460 provision.go:177] copyRemoteCerts
	I1212 00:29:07.705964  217460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:29:07.705996  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:07.723834  217460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa Username:docker}
	I1212 00:29:07.817989  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:29:07.818057  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:29:07.836508  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:29:07.836570  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 00:29:07.852872  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:29:07.852930  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:29:07.868808  217460 provision.go:87] duration metric: took 354.084635ms to configureAuth
	I1212 00:29:07.868832  217460 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:29:07.868974  217460 config.go:182] Loaded profile config "NoKubernetes-131237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:29:07.869074  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:07.887094  217460 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:07.887376  217460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1212 00:29:07.887400  217460 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:29:08.166499  217460 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:29:08.166523  217460 machine.go:97] duration metric: took 4.11176039s to provisionDockerMachine
	I1212 00:29:08.166532  217460 client.go:176] duration metric: took 9.018135071s to LocalClient.Create
	I1212 00:29:08.166566  217460 start.go:167] duration metric: took 9.018197315s to libmachine.API.Create "NoKubernetes-131237"
	I1212 00:29:08.166576  217460 start.go:293] postStartSetup for "NoKubernetes-131237" (driver="docker")
	I1212 00:29:08.166585  217460 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:29:08.166640  217460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:29:08.166680  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:08.184020  217460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa Username:docker}
	I1212 00:29:08.279387  217460 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:29:08.282796  217460 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:29:08.282832  217460 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:29:08.282844  217460 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:29:08.282896  217460 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:29:08.283001  217460 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:29:08.283014  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> /etc/ssl/certs/145032.pem
	I1212 00:29:08.283126  217460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:29:08.289988  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:29:08.308710  217460 start.go:296] duration metric: took 142.124054ms for postStartSetup
	I1212 00:29:08.309066  217460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-131237
	I1212 00:29:08.327283  217460 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/config.json ...
	I1212 00:29:08.327539  217460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:29:08.327581  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:08.344943  217460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa Username:docker}
	I1212 00:29:08.438082  217460 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:29:08.442270  217460 start.go:128] duration metric: took 9.295654265s to createHost
	I1212 00:29:08.442289  217460 start.go:83] releasing machines lock for "NoKubernetes-131237", held for 9.295752357s
	I1212 00:29:08.442353  217460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-131237
	I1212 00:29:08.460120  217460 ssh_runner.go:195] Run: cat /version.json
	I1212 00:29:08.460174  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:08.460227  217460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:29:08.460293  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:08.478806  217460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa Username:docker}
	I1212 00:29:08.479242  217460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa Username:docker}
	I1212 00:29:08.570176  217460 ssh_runner.go:195] Run: systemctl --version
	I1212 00:29:08.623548  217460 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:29:08.658361  217460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:29:08.662793  217460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:29:08.662844  217460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:29:08.688794  217460 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:29:08.688812  217460 start.go:496] detecting cgroup driver to use...
	I1212 00:29:08.688845  217460 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:29:08.688885  217460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:29:08.704803  217460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:29:08.716450  217460 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:29:08.716523  217460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:29:08.735637  217460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:29:08.751204  217460 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:29:08.832288  217460 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:29:08.923428  217460 docker.go:234] disabling docker service ...
	I1212 00:29:08.923514  217460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:29:08.942295  217460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:29:08.953946  217460 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:29:04.952561  210380 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:29:04.952608  210380 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:29:09.032921  217460 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:29:09.111048  217460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:29:09.123147  217460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:29:09.136347  217460 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:29:09.136399  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.145875  217460 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:29:09.145926  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.154026  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.162056  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.170068  217460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:29:09.177387  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.185287  217460 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.197899  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.206481  217460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:29:09.213428  217460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:29:09.220606  217460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:09.304423  217460 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:29:09.448327  217460 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:29:09.448386  217460 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:29:09.452235  217460 start.go:564] Will wait 60s for crictl version
	I1212 00:29:09.452281  217460 ssh_runner.go:195] Run: which crictl
	I1212 00:29:09.455648  217460 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:29:09.478093  217460 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:29:09.478163  217460 ssh_runner.go:195] Run: crio --version
	I1212 00:29:09.505347  217460 ssh_runner.go:195] Run: crio --version
	I1212 00:29:09.532377  217460 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	
	
	==> CRI-O <==
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.955625251Z" level=info msg="RDT not available in the host system"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.955635087Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.95633417Z" level=info msg="Conmon does support the --sync option"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.956348314Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.956360216Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.957007881Z" level=info msg="Conmon does support the --sync option"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.957021031Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.960321289Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.960341873Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.960882912Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.961268519Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.961324399Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.040240515Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-b5lpn Namespace:kube-system ID:17f145c15981f8081147c43a895ff73e90d1598f3aaa25231eb4b32749b591a1 UID:c278674e-b431-4e23-9b7f-e64bf1141aa8 NetNS:/var/run/netns/eee1c12f-be94-4b38-9208-17595f9ef972 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002904b0}] Aliases:map[]}"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.040395585Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-b5lpn for CNI network kindnet (type=ptp)"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.040790398Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.040819271Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.040880523Z" level=info msg="Create NRI interface"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041018951Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041033541Z" level=info msg="runtime interface created"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041042634Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041048351Z" level=info msg="runtime interface starting up..."
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041052751Z" level=info msg="starting plugins..."
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041074328Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041341116Z" level=info msg="No systemd watchdog enabled"
	Dec 12 00:29:05 pause-108809 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	6f11db6d21159       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago       Running             coredns                   0                   17f145c15981f       coredns-66bc5c9577-b5lpn               kube-system
	6ae6620086224       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   54 seconds ago       Running             kube-proxy                0                   3c2b18addbe2a       kube-proxy-8psfp                       kube-system
	90d794ebd94c1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   54 seconds ago       Running             kindnet-cni               0                   8d0c6d868ddba       kindnet-bvmdc                          kube-system
	41de3ec4d5aa2       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   0201d0420119a       kube-scheduler-pause-108809            kube-system
	1290ae60df498       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   89af789de5631       kube-apiserver-pause-108809            kube-system
	8fb46b8b8615b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   1117a810b70b3       kube-controller-manager-pause-108809   kube-system
	2b0ab04eb26df       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   6dbb936e68013       etcd-pause-108809                      kube-system
	
	
	==> coredns [6f11db6d211593329a49d20c3ba7717c980176c1fc4a2687c0baf90e40821715] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40976 - 53731 "HINFO IN 7448751840446251034.6306811712129152444. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.126159454s
	
	
	==> describe nodes <==
	Name:               pause-108809
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-108809
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=pause-108809
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_28_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:28:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-108809
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:29:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:29:03 +0000   Fri, 12 Dec 2025 00:28:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:29:03 +0000   Fri, 12 Dec 2025 00:28:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:29:03 +0000   Fri, 12 Dec 2025 00:28:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:29:03 +0000   Fri, 12 Dec 2025 00:28:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-108809
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                399173fd-1be6-4e2a-990f-f867e8b96a97
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://Unknown
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-b5lpn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     55s
	  kube-system                 etcd-pause-108809                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-bvmdc                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-pause-108809             250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-pause-108809    200m (2%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-8psfp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-pause-108809             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node pause-108809 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node pause-108809 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)  kubelet          Node pause-108809 status is now: NodeHasSufficientPID
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s                kubelet          Node pause-108809 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s                kubelet          Node pause-108809 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s                kubelet          Node pause-108809 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node pause-108809 event: Registered Node pause-108809 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-108809 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [2b0ab04eb26df9b71b29754cf1bd4e0318459af4d689ec2ec659fa10d7e04532] <==
	{"level":"warn","ts":"2025-12-12T00:28:07.858694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.871265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.880429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.889337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.897804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.907906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.915147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.924279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.933603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.942749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.951923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.960950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.970278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.978103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.987366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.994791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.002509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.011048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.019756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.033375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.046739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.050428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.060566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.069566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.141706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55882","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:29:11 up  1:11,  0 user,  load average: 3.25, 1.99, 1.30
	Linux pause-108809 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [90d794ebd94c1511854fdbd1662d5c751a16b53886cfdfc51cace8fc7f60fb0d] <==
	I1212 00:28:17.084683       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:28:17.084944       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 00:28:17.085079       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:28:17.085101       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:28:17.085126       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:28:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:28:17.288689       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:28:17.288727       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:28:17.288739       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:28:17.288916       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1212 00:28:47.289371       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1212 00:28:47.289381       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1212 00:28:47.289380       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1212 00:28:47.303766       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1212 00:28:48.788916       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:28:48.788941       1 metrics.go:72] Registering metrics
	I1212 00:28:48.788997       1 controller.go:711] "Syncing nftables rules"
	I1212 00:28:57.289808       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:28:57.289858       1 main.go:301] handling current node
	I1212 00:29:07.290992       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:29:07.291020       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1290ae60df498ba14c46da67189d27fa731430d7a9bb9f046f812c45953c1041] <==
	I1212 00:28:08.759551       1 policy_source.go:240] refreshing policies
	E1212 00:28:08.765528       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1212 00:28:08.814570       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:28:08.815934       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:28:08.816037       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1212 00:28:08.829106       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:28:08.830066       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 00:28:08.941405       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:28:09.617246       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 00:28:09.621346       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 00:28:09.621422       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:28:10.029890       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:28:10.070553       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:28:10.120307       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 00:28:10.131716       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1212 00:28:10.133140       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:28:10.139320       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:28:10.278515       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:28:11.180098       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:28:11.206279       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 00:28:11.235456       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 00:28:15.978654       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:28:16.081264       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:28:16.092986       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:28:16.383034       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8fb46b8b8615b03b0dfec91141c1f50e62e7cb228b01e02fa770caad208e9ab6] <==
	I1212 00:28:15.272718       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 00:28:15.272809       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-108809"
	I1212 00:28:15.272863       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1212 00:28:15.273896       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 00:28:15.273923       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 00:28:15.273970       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 00:28:15.274221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 00:28:15.274316       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 00:28:15.275083       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 00:28:15.275115       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 00:28:15.276276       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1212 00:28:15.276294       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 00:28:15.276884       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 00:28:15.279101       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:28:15.279109       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1212 00:28:15.279152       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 00:28:15.280352       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 00:28:15.280465       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 00:28:15.280711       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 00:28:15.281452       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 00:28:15.287648       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:28:15.287667       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 00:28:15.287675       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 00:28:15.298821       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:29:00.279726       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6ae66200862240b5d6ceae4c86c30cafe17708f8a56361f62fa78dab5081c690] <==
	I1212 00:28:16.894241       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:28:16.975533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 00:28:17.075881       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 00:28:17.075926       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 00:28:17.076039       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:28:17.107025       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:28:17.107236       1 server_linux.go:132] "Using iptables Proxier"
	I1212 00:28:17.115329       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:28:17.115790       1 server.go:527] "Version info" version="v1.34.2"
	I1212 00:28:17.115905       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:28:17.118247       1 config.go:200] "Starting service config controller"
	I1212 00:28:17.118252       1 config.go:309] "Starting node config controller"
	I1212 00:28:17.118275       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:28:17.118279       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:28:17.118286       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:28:17.118298       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:28:17.118300       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:28:17.118304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:28:17.118305       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:28:17.218952       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:28:17.218981       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:28:17.219001       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [41de3ec4d5aa25f201a8754434478b30d08711863b18624c5bb2fdd05571d3af] <==
	E1212 00:28:08.697646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 00:28:08.697906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 00:28:08.697983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 00:28:08.698108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 00:28:08.698208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 00:28:08.698268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 00:28:08.698300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 00:28:08.698390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 00:28:08.698517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 00:28:08.698602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 00:28:08.698732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 00:28:08.698981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 00:28:08.699703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 00:28:08.700090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 00:28:08.700164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 00:28:08.700833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 00:28:09.507945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 00:28:09.535962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 00:28:09.563245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1212 00:28:09.690957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 00:28:09.700200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 00:28:09.727402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 00:28:09.788331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 00:28:09.851647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1212 00:28:12.592645       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:28:57 pause-108809 kubelet[1312]: I1212 00:28:57.563366    1312 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 12 00:28:57 pause-108809 kubelet[1312]: I1212 00:28:57.657217    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df62z\" (UniqueName: \"kubernetes.io/projected/c278674e-b431-4e23-9b7f-e64bf1141aa8-kube-api-access-df62z\") pod \"coredns-66bc5c9577-b5lpn\" (UID: \"c278674e-b431-4e23-9b7f-e64bf1141aa8\") " pod="kube-system/coredns-66bc5c9577-b5lpn"
	Dec 12 00:28:57 pause-108809 kubelet[1312]: I1212 00:28:57.657267    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c278674e-b431-4e23-9b7f-e64bf1141aa8-config-volume\") pod \"coredns-66bc5c9577-b5lpn\" (UID: \"c278674e-b431-4e23-9b7f-e64bf1141aa8\") " pod="kube-system/coredns-66bc5c9577-b5lpn"
	Dec 12 00:28:58 pause-108809 kubelet[1312]: I1212 00:28:58.454283    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b5lpn" podStartSLOduration=42.454261559 podStartE2EDuration="42.454261559s" podCreationTimestamp="2025-12-12 00:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:28:58.454060167 +0000 UTC m=+47.420702923" watchObservedRunningTime="2025-12-12 00:28:58.454261559 +0000 UTC m=+47.420904311"
	Dec 12 00:29:02 pause-108809 kubelet[1312]: W1212 00:29:02.447834    1312 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 12 00:29:02 pause-108809 kubelet[1312]: E1212 00:29:02.447927    1312 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 12 00:29:02 pause-108809 kubelet[1312]: E1212 00:29:02.447976    1312 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:02 pause-108809 kubelet[1312]: E1212 00:29:02.447992    1312 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:02 pause-108809 kubelet[1312]: W1212 00:29:02.548279    1312 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 12 00:29:02 pause-108809 kubelet[1312]: W1212 00:29:02.689090    1312 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 12 00:29:02 pause-108809 kubelet[1312]: W1212 00:29:02.910433    1312 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.028997    1312 log.go:32] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.316703    1312 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.316819    1312 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.316843    1312 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.316861    1312 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:03 pause-108809 kubelet[1312]: W1212 00:29:03.336941    1312 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.448737    1312 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.448790    1312 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.448807    1312 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:08 pause-108809 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:29:08 pause-108809 kubelet[1312]: I1212 00:29:08.324231    1312 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 12 00:29:08 pause-108809 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:29:08 pause-108809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:29:08 pause-108809 systemd[1]: kubelet.service: Consumed 2.055s CPU time.
	

                                                
                                                
-- /stdout --
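The kubelet entries at the tail of the log above show repeated dial failures against /var/run/crio/crio.sock just before kubelet.service is stopped, which is consistent with the pause stopping CRI-O while the status check below still expects a live runtime. A minimal sketch of checking this by hand, assuming the pause-108809 node is still reachable over SSH (these commands are illustrative, not part of the test harness):

	# does the CRI-O socket still exist inside the node?
	out/minikube-linux-amd64 -p pause-108809 ssh -- ls -l /var/run/crio/crio.sock
	# are the crio and kubelet units still active? (prints one state per unit)
	out/minikube-linux-amd64 -p pause-108809 ssh -- sudo systemctl is-active crio kubelet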
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-108809 -n pause-108809
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-108809 -n pause-108809: exit status 2 (340.229711ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-108809 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
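The kubectl query above uses a field selector to list only pods whose phase is not Running; empty output means every pod still reports Running. A small usage sketch of the same selector inverted, assuming the pause-108809 kubeconfig context used by the harness:

	# list all pods currently in phase Running (table output, namespace column included via -A)
	kubectl --context pause-108809 get po -A --field-selector=status.phase=Running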
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-108809
helpers_test.go:244: (dbg) docker inspect pause-108809:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39",
	        "Created": "2025-12-12T00:27:50.544827353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200475,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:27:54.217134342Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39/hosts",
	        "LogPath": "/var/lib/docker/containers/ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39/ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39-json.log",
	        "Name": "/pause-108809",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-108809:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-108809",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce748c2a166450440f29eb9b4897910c66b3b5c31558f1ce772a0691f75f4e39",
	                "LowerDir": "/var/lib/docker/overlay2/8dec569209a2818136b7825930bfa724cadfdaf620b00d20ac0b310a7a92cd76-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8dec569209a2818136b7825930bfa724cadfdaf620b00d20ac0b310a7a92cd76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8dec569209a2818136b7825930bfa724cadfdaf620b00d20ac0b310a7a92cd76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8dec569209a2818136b7825930bfa724cadfdaf620b00d20ac0b310a7a92cd76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-108809",
	                "Source": "/var/lib/docker/volumes/pause-108809/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-108809",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-108809",
	                "name.minikube.sigs.k8s.io": "pause-108809",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b4a1724901e983f1518e5b505e7b809bf4680e7d90731dd8e8730295305f1d69",
	            "SandboxKey": "/var/run/docker/netns/b4a1724901e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-108809": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "448914ed3e5c7e251bd9b03d41daf8f439407ddc3853ad5127fb050155f96e3f",
	                    "EndpointID": "c5a8110563e812393bbd5bf4c5b49890b76ce4a8bb9cf9901b6522e6f126c9a8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "c2:87:20:9e:52:58",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-108809",
	                        "ce748c2a1664"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
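The full inspect dump above is what the harness captures verbatim; when only one or two fields matter, the same data can be pulled with a Go template. A minimal sketch using field names visible in the JSON above (illustrative, not something the harness runs):

	# container state only
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-108809
	# host port published for the API server's 8443/tcp mapping
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-108809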
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-108809 -n pause-108809
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-108809 -n pause-108809: exit status 2 (324.16265ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
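The harness treats exit status 2 here as possibly benign; for a machine-readable view of the same per-component status, minikube also supports JSON output (shown only as an illustration, not run by the harness):

	out/minikube-linux-amd64 status -p pause-108809 --output json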
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-108809 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-363822 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --cancel-scheduled                                                                       │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │ 12 Dec 25 00:26 UTC │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:26 UTC │ 12 Dec 25 00:26 UTC │
	│ delete  │ -p scheduled-stop-363822                                                                                          │ scheduled-stop-363822       │ jenkins │ v1.37.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:27 UTC │
	│ start   │ -p insufficient-storage-333251 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-333251 │ jenkins │ v1.37.0 │ 12 Dec 25 00:27 UTC │                     │
	│ delete  │ -p insufficient-storage-333251                                                                                    │ insufficient-storage-333251 │ jenkins │ v1.37.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:27 UTC │
	│ start   │ -p pause-108809 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-108809                │ jenkins │ v1.37.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:29 UTC │
	│ start   │ -p offline-crio-101842 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-101842         │ jenkins │ v1.37.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p running-upgrade-299658 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-299658      │ jenkins │ v1.35.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p stopped-upgrade-148693 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-148693      │ jenkins │ v1.35.0 │ 12 Dec 25 00:27 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p running-upgrade-299658 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ running-upgrade-299658      │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │                     │
	│ stop    │ stopped-upgrade-148693 stop                                                                                       │ stopped-upgrade-148693      │ jenkins │ v1.35.0 │ 12 Dec 25 00:28 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p stopped-upgrade-148693 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ stopped-upgrade-148693      │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │                     │
	│ delete  │ -p offline-crio-101842                                                                                            │ offline-crio-101842         │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p force-systemd-env-551801 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio        │ force-systemd-env-551801    │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │ 12 Dec 25 00:28 UTC │
	│ delete  │ -p force-systemd-env-551801                                                                                       │ force-systemd-env-551801    │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │ 12 Dec 25 00:28 UTC │
	│ start   │ -p NoKubernetes-131237 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio     │ NoKubernetes-131237         │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │                     │
	│ start   │ -p NoKubernetes-131237 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio             │ NoKubernetes-131237         │ jenkins │ v1.37.0 │ 12 Dec 25 00:28 UTC │                     │
	│ start   │ -p pause-108809 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-108809                │ jenkins │ v1.37.0 │ 12 Dec 25 00:29 UTC │ 12 Dec 25 00:29 UTC │
	│ pause   │ -p pause-108809 --alsologtostderr -v=5                                                                            │ pause-108809                │ jenkins │ v1.37.0 │ 12 Dec 25 00:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:29:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:29:00.510197  217893 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:29:00.510492  217893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:00.510504  217893 out.go:374] Setting ErrFile to fd 2...
	I1212 00:29:00.510512  217893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:00.510691  217893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:29:00.511116  217893 out.go:368] Setting JSON to false
	I1212 00:29:00.512292  217893 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4286,"bootTime":1765495054,"procs":393,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:29:00.512348  217893 start.go:143] virtualization: kvm guest
	I1212 00:29:00.515309  217893 out.go:179] * [pause-108809] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:29:00.516821  217893 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:29:00.516850  217893 notify.go:221] Checking for updates...
	I1212 00:29:00.519409  217893 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:29:00.520673  217893 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:29:00.521945  217893 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:29:00.523243  217893 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:29:00.524417  217893 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:29:00.525959  217893 config.go:182] Loaded profile config "pause-108809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:29:00.526514  217893 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:29:00.551033  217893 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:29:00.551200  217893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:00.611792  217893 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:91 SystemTime:2025-12-12 00:29:00.601831402 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:29:00.611896  217893 docker.go:319] overlay module found
	I1212 00:29:00.614977  217893 out.go:179] * Using the docker driver based on existing profile
	I1212 00:29:00.616041  217893 start.go:309] selected driver: docker
	I1212 00:29:00.616054  217893 start.go:927] validating driver "docker" against &{Name:pause-108809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-108809 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:00.616177  217893 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:29:00.616280  217893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:00.673044  217893 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:91 SystemTime:2025-12-12 00:29:00.663176264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:29:00.673711  217893 cni.go:84] Creating CNI manager for ""
	I1212 00:29:00.673779  217893 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:00.673822  217893 start.go:353] cluster config:
	{Name:pause-108809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-108809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:00.675519  217893 out.go:179] * Starting "pause-108809" primary control-plane node in "pause-108809" cluster
	I1212 00:29:00.676654  217893 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:29:00.677828  217893 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:29:00.678933  217893 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:29:00.678963  217893 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:29:00.678975  217893 cache.go:65] Caching tarball of preloaded images
	I1212 00:29:00.679034  217893 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:29:00.679073  217893 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:29:00.679089  217893 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:29:00.679229  217893 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/config.json ...
	I1212 00:29:00.699131  217893 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:29:00.699151  217893 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:29:00.699170  217893 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:29:00.699204  217893 start.go:360] acquireMachinesLock for pause-108809: {Name:mkbc1656233633c9d25246c0440fc129d4adfcfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:29:00.699275  217893 start.go:364] duration metric: took 49.479µs to acquireMachinesLock for "pause-108809"
	I1212 00:29:00.699297  217893 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:29:00.699307  217893 fix.go:54] fixHost starting: 
	I1212 00:29:00.699590  217893 cli_runner.go:164] Run: docker container inspect pause-108809 --format={{.State.Status}}
	I1212 00:29:00.718386  217893 fix.go:112] recreateIfNeeded on pause-108809: state=Running err=<nil>
	W1212 00:29:00.718446  217893 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:28:57.132736  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:57.133114  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:28:57.631745  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:57.632103  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:28:58.132598  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:58.132920  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:28:58.632598  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:58.632999  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:28:59.132730  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:59.133115  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:28:59.632634  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:28:59.633029  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:00.132785  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:00.133254  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:00.632663  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:00.633093  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:01.131702  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:01.132075  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:01.632622  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:01.633002  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:28:59.148189  217460 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 00:28:59.148369  217460 start.go:159] libmachine.API.Create for "NoKubernetes-131237" (driver="docker")
	I1212 00:28:59.148393  217460 client.go:173] LocalClient.Create starting
	I1212 00:28:59.148463  217460 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem
	I1212 00:28:59.148507  217460 main.go:143] libmachine: Decoding PEM data...
	I1212 00:28:59.148524  217460 main.go:143] libmachine: Parsing certificate...
	I1212 00:28:59.148577  217460 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem
	I1212 00:28:59.148621  217460 main.go:143] libmachine: Decoding PEM data...
	I1212 00:28:59.148630  217460 main.go:143] libmachine: Parsing certificate...
	I1212 00:28:59.148957  217460 cli_runner.go:164] Run: docker network inspect NoKubernetes-131237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:28:59.164592  217460 cli_runner.go:211] docker network inspect NoKubernetes-131237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:28:59.164661  217460 network_create.go:284] running [docker network inspect NoKubernetes-131237] to gather additional debugging logs...
	I1212 00:28:59.164687  217460 cli_runner.go:164] Run: docker network inspect NoKubernetes-131237
	W1212 00:28:59.180841  217460 cli_runner.go:211] docker network inspect NoKubernetes-131237 returned with exit code 1
	I1212 00:28:59.180867  217460 network_create.go:287] error running [docker network inspect NoKubernetes-131237]: docker network inspect NoKubernetes-131237: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-131237 not found
	I1212 00:28:59.180881  217460 network_create.go:289] output of [docker network inspect NoKubernetes-131237]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-131237 not found
	
	** /stderr **
	I1212 00:28:59.180964  217460 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:28:59.197900  217460 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ad72354fdcf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:1e:a7:00:22:62} reservation:<nil>}
	I1212 00:28:59.198405  217460 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-18bf2e8051c8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:8e:8f:a6:9d:0c} reservation:<nil>}
	I1212 00:28:59.198970  217460 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5251ddf0a35 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:5a:cf:fd:1d:2a} reservation:<nil>}
	I1212 00:28:59.199656  217460 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-448914ed3e5c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:9e:2c:f3:9e:e5:cd} reservation:<nil>}
	I1212 00:28:59.200437  217460 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb0d00}
	I1212 00:28:59.200457  217460 network_create.go:124] attempt to create docker network NoKubernetes-131237 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 00:28:59.200511  217460 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-131237 NoKubernetes-131237
	I1212 00:28:59.244845  217460 network_create.go:108] docker network NoKubernetes-131237 192.168.85.0/24 created
	I1212 00:28:59.244870  217460 kic.go:121] calculated static IP "192.168.85.2" for the "NoKubernetes-131237" container
	I1212 00:28:59.244932  217460 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:28:59.262019  217460 cli_runner.go:164] Run: docker volume create NoKubernetes-131237 --label name.minikube.sigs.k8s.io=NoKubernetes-131237 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:28:59.278868  217460 oci.go:103] Successfully created a docker volume NoKubernetes-131237
	I1212 00:28:59.278934  217460 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-131237-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-131237 --entrypoint /usr/bin/test -v NoKubernetes-131237:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:28:59.650939  217460 oci.go:107] Successfully prepared a docker volume NoKubernetes-131237
	I1212 00:28:59.651015  217460 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:28:59.651029  217460 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:28:59.651096  217460 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v NoKubernetes-131237:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:29:03.481928  217460 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v NoKubernetes-131237:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.830783516s)
	I1212 00:29:03.481962  217460 kic.go:203] duration metric: took 3.83092807s to extract preloaded images to volume ...
	W1212 00:29:03.482066  217460 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 00:29:03.482105  217460 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 00:29:03.482156  217460 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:29:03.541907  217460 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-131237 --name NoKubernetes-131237 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-131237 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-131237 --network NoKubernetes-131237 --ip 192.168.85.2 --volume NoKubernetes-131237:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 00:29:03.810922  217460 cli_runner.go:164] Run: docker container inspect NoKubernetes-131237 --format={{.State.Running}}
	I1212 00:29:03.829633  217460 cli_runner.go:164] Run: docker container inspect NoKubernetes-131237 --format={{.State.Status}}
	I1212 00:29:03.848531  217460 cli_runner.go:164] Run: docker exec NoKubernetes-131237 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:29:03.894533  217460 oci.go:144] the created container "NoKubernetes-131237" has a running status.
	I1212 00:29:03.894559  217460 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa...
	I1212 00:29:03.936539  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 00:29:03.936589  217460 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:28:59.952157  210380 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:28:59.952198  210380 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:29:00.720781  217893 out.go:252] * Updating the running docker "pause-108809" container ...
	I1212 00:29:00.720824  217893 machine.go:94] provisionDockerMachine start ...
	I1212 00:29:00.720888  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:00.737642  217893 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:00.737881  217893 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1212 00:29:00.737898  217893 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:29:00.869243  217893 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-108809
	
	I1212 00:29:00.869271  217893 ubuntu.go:182] provisioning hostname "pause-108809"
	I1212 00:29:00.869335  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:00.889063  217893 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:00.889385  217893 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1212 00:29:00.889406  217893 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-108809 && echo "pause-108809" | sudo tee /etc/hostname
	I1212 00:29:01.037851  217893 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-108809
	
	I1212 00:29:01.037942  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:01.055541  217893 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:01.055784  217893 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1212 00:29:01.055808  217893 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-108809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-108809/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-108809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:29:01.192012  217893 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:29:01.192042  217893 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:29:01.192078  217893 ubuntu.go:190] setting up certificates
	I1212 00:29:01.192091  217893 provision.go:84] configureAuth start
	I1212 00:29:01.192160  217893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-108809
	I1212 00:29:01.214046  217893 provision.go:143] copyHostCerts
	I1212 00:29:01.214123  217893 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:29:01.214150  217893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:29:01.214234  217893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:29:01.214377  217893 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:29:01.214392  217893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:29:01.214433  217893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:29:01.214536  217893 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:29:01.214548  217893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:29:01.214585  217893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:29:01.214653  217893 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.pause-108809 san=[127.0.0.1 192.168.76.2 localhost minikube pause-108809]
	I1212 00:29:01.382615  217893 provision.go:177] copyRemoteCerts
	I1212 00:29:01.382687  217893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:29:01.382738  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:01.401963  217893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/pause-108809/id_rsa Username:docker}
	I1212 00:29:01.498881  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:29:01.534801  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1212 00:29:01.560499  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:29:01.579971  217893 provision.go:87] duration metric: took 387.859751ms to configureAuth
	I1212 00:29:01.580004  217893 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:29:01.580224  217893 config.go:182] Loaded profile config "pause-108809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:29:01.580326  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:01.598302  217893 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:01.598576  217893 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1212 00:29:01.598593  217893 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:29:03.651414  217893 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:29:03.651440  217893 machine.go:97] duration metric: took 2.930607352s to provisionDockerMachine
	I1212 00:29:03.651484  217893 start.go:293] postStartSetup for "pause-108809" (driver="docker")
	I1212 00:29:03.651499  217893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:29:03.651566  217893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:29:03.651618  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:03.675163  217893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/pause-108809/id_rsa Username:docker}
	I1212 00:29:03.775427  217893 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:29:03.779259  217893 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:29:03.779289  217893 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:29:03.779302  217893 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:29:03.779352  217893 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:29:03.779449  217893 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:29:03.779585  217893 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:29:03.787499  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:29:03.805383  217893 start.go:296] duration metric: took 153.883196ms for postStartSetup
	I1212 00:29:03.805464  217893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:29:03.805533  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:03.824625  217893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/pause-108809/id_rsa Username:docker}
	I1212 00:29:03.921857  217893 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:29:03.926494  217893 fix.go:56] duration metric: took 3.227159032s for fixHost
	I1212 00:29:03.926520  217893 start.go:83] releasing machines lock for "pause-108809", held for 3.22723166s
	I1212 00:29:03.926583  217893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-108809
	I1212 00:29:03.949897  217893 ssh_runner.go:195] Run: cat /version.json
	I1212 00:29:03.949951  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:03.949953  217893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:29:03.950040  217893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-108809
	I1212 00:29:03.970832  217893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/pause-108809/id_rsa Username:docker}
	I1212 00:29:03.972618  217893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/pause-108809/id_rsa Username:docker}
	I1212 00:29:04.123574  217893 ssh_runner.go:195] Run: systemctl --version
	I1212 00:29:04.129747  217893 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:29:04.163281  217893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:29:04.167790  217893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:29:04.167849  217893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:29:04.175319  217893 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:29:04.175334  217893 start.go:496] detecting cgroup driver to use...
	I1212 00:29:04.175359  217893 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:29:04.175406  217893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:29:04.188821  217893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:29:04.200606  217893 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:29:04.200650  217893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:29:04.213950  217893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:29:04.225430  217893 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:29:04.327195  217893 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:29:04.431152  217893 docker.go:234] disabling docker service ...
	I1212 00:29:04.431241  217893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:29:04.445380  217893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:29:04.457394  217893 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:29:04.559229  217893 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:29:04.664005  217893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:29:04.675793  217893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:29:04.689160  217893 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:29:04.689213  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.698028  217893 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:29:04.698079  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.706358  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.714185  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.723434  217893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:29:04.730809  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.738843  217893 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.746461  217893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:04.754381  217893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:29:04.760927  217893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:29:04.767500  217893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:04.866190  217893 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:29:05.044918  217893 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:29:05.044988  217893 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:29:05.049164  217893 start.go:564] Will wait 60s for crictl version
	I1212 00:29:05.049223  217893 ssh_runner.go:195] Run: which crictl
	I1212 00:29:05.053120  217893 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:29:05.087221  217893 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:29:05.087304  217893 ssh_runner.go:195] Run: crio --version
	I1212 00:29:05.122326  217893 ssh_runner.go:195] Run: crio --version
	I1212 00:29:05.153846  217893 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:29:05.154882  217893 cli_runner.go:164] Run: docker network inspect pause-108809 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:29:05.173354  217893 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 00:29:05.177702  217893 kubeadm.go:884] updating cluster {Name:pause-108809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-108809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:29:05.177836  217893 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:29:05.177878  217893 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:05.210846  217893 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:05.210870  217893 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:29:05.210924  217893 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:05.238639  217893 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:05.238656  217893 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:29:05.238663  217893 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1212 00:29:05.238763  217893 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-108809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-108809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:29:05.238824  217893 ssh_runner.go:195] Run: crio config
	I1212 00:29:05.283292  217893 cni.go:84] Creating CNI manager for ""
	I1212 00:29:05.283315  217893 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:05.283332  217893 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:29:05.283360  217893 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-108809 NodeName:pause-108809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:29:05.283596  217893 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-108809"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:29:05.283656  217893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:29:05.291507  217893 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:29:05.291571  217893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:29:05.299760  217893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1212 00:29:05.312552  217893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:29:05.325335  217893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1212 00:29:05.337304  217893 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:29:05.340834  217893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:05.460687  217893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:05.473322  217893 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809 for IP: 192.168.76.2
	I1212 00:29:05.473345  217893 certs.go:195] generating shared ca certs ...
	I1212 00:29:05.473364  217893 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:05.473552  217893 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:29:05.473617  217893 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:29:05.473633  217893 certs.go:257] generating profile certs ...
	I1212 00:29:05.473751  217893 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/client.key
	I1212 00:29:05.473827  217893 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/apiserver.key.2cc90d5b
	I1212 00:29:05.473885  217893 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/proxy-client.key
	I1212 00:29:05.474021  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:29:05.474064  217893 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:29:05.474079  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:29:05.474126  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:29:05.474163  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:29:05.474200  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:29:05.474258  217893 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:29:05.474978  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:29:05.492396  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:29:05.509412  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:29:05.526071  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:29:05.542188  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 00:29:05.558841  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:29:05.575073  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:29:05.591093  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:29:05.607409  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:29:05.623679  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:29:05.640656  217893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:29:05.656694  217893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:29:05.667954  217893 ssh_runner.go:195] Run: openssl version
	I1212 00:29:05.673764  217893 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:29:05.680689  217893 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:29:05.687489  217893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:29:05.690792  217893 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:29:05.690835  217893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:29:05.724598  217893 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:29:05.731516  217893 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:29:05.738178  217893 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:29:05.744899  217893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:29:05.748224  217893 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:29:05.748274  217893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:29:05.781242  217893 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:29:05.788292  217893 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:05.794963  217893 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:29:05.801858  217893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:05.805649  217893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:05.805697  217893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:05.839708  217893 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:29:05.846806  217893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:05.850268  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:29:05.883429  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:29:05.916721  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:29:05.949659  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:29:05.983765  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:29:06.017264  217893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:29:06.050770  217893 kubeadm.go:401] StartCluster: {Name:pause-108809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-108809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:06.050883  217893 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:29:06.050934  217893 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:29:06.077159  217893 cri.go:89] found id: "6f11db6d211593329a49d20c3ba7717c980176c1fc4a2687c0baf90e40821715"
	I1212 00:29:06.077190  217893 cri.go:89] found id: "6ae66200862240b5d6ceae4c86c30cafe17708f8a56361f62fa78dab5081c690"
	I1212 00:29:06.077198  217893 cri.go:89] found id: "90d794ebd94c1511854fdbd1662d5c751a16b53886cfdfc51cace8fc7f60fb0d"
	I1212 00:29:06.077209  217893 cri.go:89] found id: "41de3ec4d5aa25f201a8754434478b30d08711863b18624c5bb2fdd05571d3af"
	I1212 00:29:06.077215  217893 cri.go:89] found id: "1290ae60df498ba14c46da67189d27fa731430d7a9bb9f046f812c45953c1041"
	I1212 00:29:06.077220  217893 cri.go:89] found id: "8fb46b8b8615b03b0dfec91141c1f50e62e7cb228b01e02fa770caad208e9ab6"
	I1212 00:29:06.077225  217893 cri.go:89] found id: "2b0ab04eb26df9b71b29754cf1bd4e0318459af4d689ec2ec659fa10d7e04532"
	I1212 00:29:06.077233  217893 cri.go:89] found id: ""
	I1212 00:29:06.077269  217893 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 00:29:06.088228  217893 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:29:06Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:29:06.088297  217893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:29:06.096028  217893 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:29:06.096044  217893 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:29:06.096080  217893 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:29:06.103445  217893 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:29:06.104071  217893 kubeconfig.go:125] found "pause-108809" server: "https://192.168.76.2:8443"
	I1212 00:29:06.104938  217893 kapi.go:59] client config for pause-108809: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/client.key", CAFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:06.105305  217893 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 00:29:06.105318  217893 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:29:06.105323  217893 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:29:06.105332  217893 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:29:06.105340  217893 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 00:29:06.105666  217893 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:29:06.113179  217893 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 00:29:06.113202  217893 kubeadm.go:602] duration metric: took 17.15275ms to restartPrimaryControlPlane
	I1212 00:29:06.113211  217893 kubeadm.go:403] duration metric: took 62.446229ms to StartCluster
	I1212 00:29:06.113230  217893 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:06.113287  217893 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:29:06.114607  217893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:06.114866  217893 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:29:06.114931  217893 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:29:06.115129  217893 config.go:182] Loaded profile config "pause-108809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:29:06.117342  217893 out.go:179] * Verifying Kubernetes components...
	I1212 00:29:06.117342  217893 out.go:179] * Enabled addons: 
	I1212 00:29:02.132743  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:02.133157  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:02.632640  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:02.633029  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:03.132717  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:03.133176  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:03.632631  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:03.633068  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:04.132763  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:04.133147  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:04.631777  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:04.632157  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:05.132694  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:05.133087  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:05.632619  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:05.632929  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:06.132601  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:06.132940  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:06.632609  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:06.632958  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:06.118465  217893 addons.go:530] duration metric: took 3.537801ms for enable addons: enabled=[]
	I1212 00:29:06.118515  217893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:06.221097  217893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:06.233111  217893 node_ready.go:35] waiting up to 6m0s for node "pause-108809" to be "Ready" ...
	I1212 00:29:06.240103  217893 node_ready.go:49] node "pause-108809" is "Ready"
	I1212 00:29:06.240128  217893 node_ready.go:38] duration metric: took 6.987375ms for node "pause-108809" to be "Ready" ...
	I1212 00:29:06.240143  217893 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:29:06.240193  217893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:29:06.251017  217893 api_server.go:72] duration metric: took 136.117435ms to wait for apiserver process to appear ...
	I1212 00:29:06.251040  217893 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:29:06.251058  217893 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:29:06.254692  217893 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 00:29:06.255486  217893 api_server.go:141] control plane version: v1.34.2
	I1212 00:29:06.255508  217893 api_server.go:131] duration metric: took 4.462067ms to wait for apiserver health ...
	I1212 00:29:06.255517  217893 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:29:06.258652  217893 system_pods.go:59] 7 kube-system pods found
	I1212 00:29:06.258690  217893 system_pods.go:61] "coredns-66bc5c9577-b5lpn" [c278674e-b431-4e23-9b7f-e64bf1141aa8] Running
	I1212 00:29:06.258699  217893 system_pods.go:61] "etcd-pause-108809" [c085d719-7321-4e80-b0be-6a716ee272ca] Running
	I1212 00:29:06.258705  217893 system_pods.go:61] "kindnet-bvmdc" [fdac71f9-d66e-4e40-a014-65e866e5bd85] Running
	I1212 00:29:06.258716  217893 system_pods.go:61] "kube-apiserver-pause-108809" [b2462188-0872-4737-8642-8057f3e1e7c1] Running
	I1212 00:29:06.258726  217893 system_pods.go:61] "kube-controller-manager-pause-108809" [0eb5f9f9-3288-40ac-8adc-eeba09986127] Running
	I1212 00:29:06.258735  217893 system_pods.go:61] "kube-proxy-8psfp" [dd013fac-10d7-44eb-849b-2ba423554488] Running
	I1212 00:29:06.258740  217893 system_pods.go:61] "kube-scheduler-pause-108809" [6f071b53-9b77-4fda-8174-73eeecfa738d] Running
	I1212 00:29:06.258749  217893 system_pods.go:74] duration metric: took 3.222438ms to wait for pod list to return data ...
	I1212 00:29:06.258757  217893 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:29:06.260539  217893 default_sa.go:45] found service account: "default"
	I1212 00:29:06.260555  217893 default_sa.go:55] duration metric: took 1.790262ms for default service account to be created ...
	I1212 00:29:06.260562  217893 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:29:06.262717  217893 system_pods.go:86] 7 kube-system pods found
	I1212 00:29:06.262737  217893 system_pods.go:89] "coredns-66bc5c9577-b5lpn" [c278674e-b431-4e23-9b7f-e64bf1141aa8] Running
	I1212 00:29:06.262743  217893 system_pods.go:89] "etcd-pause-108809" [c085d719-7321-4e80-b0be-6a716ee272ca] Running
	I1212 00:29:06.262746  217893 system_pods.go:89] "kindnet-bvmdc" [fdac71f9-d66e-4e40-a014-65e866e5bd85] Running
	I1212 00:29:06.262750  217893 system_pods.go:89] "kube-apiserver-pause-108809" [b2462188-0872-4737-8642-8057f3e1e7c1] Running
	I1212 00:29:06.262753  217893 system_pods.go:89] "kube-controller-manager-pause-108809" [0eb5f9f9-3288-40ac-8adc-eeba09986127] Running
	I1212 00:29:06.262756  217893 system_pods.go:89] "kube-proxy-8psfp" [dd013fac-10d7-44eb-849b-2ba423554488] Running
	I1212 00:29:06.262759  217893 system_pods.go:89] "kube-scheduler-pause-108809" [6f071b53-9b77-4fda-8174-73eeecfa738d] Running
	I1212 00:29:06.262764  217893 system_pods.go:126] duration metric: took 2.198215ms to wait for k8s-apps to be running ...
	I1212 00:29:06.262770  217893 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:29:06.262805  217893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:06.274417  217893 system_svc.go:56] duration metric: took 11.640503ms WaitForService to wait for kubelet
	I1212 00:29:06.274438  217893 kubeadm.go:587] duration metric: took 159.541412ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:29:06.274455  217893 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:29:06.276144  217893 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:29:06.276162  217893 node_conditions.go:123] node cpu capacity is 8
	I1212 00:29:06.276174  217893 node_conditions.go:105] duration metric: took 1.715367ms to run NodePressure ...
	I1212 00:29:06.276184  217893 start.go:242] waiting for startup goroutines ...
	I1212 00:29:06.276190  217893 start.go:247] waiting for cluster config update ...
	I1212 00:29:06.276206  217893 start.go:256] writing updated cluster config ...
	I1212 00:29:06.276518  217893 ssh_runner.go:195] Run: rm -f paused
	I1212 00:29:06.279868  217893 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:29:06.280528  217893 kapi.go:59] client config for pause-108809: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/profiles/pause-108809/client.key", CAFile:"/home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:06.282539  217893 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b5lpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.285632  217893 pod_ready.go:94] pod "coredns-66bc5c9577-b5lpn" is "Ready"
	I1212 00:29:06.285647  217893 pod_ready.go:86] duration metric: took 3.086348ms for pod "coredns-66bc5c9577-b5lpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.287173  217893 pod_ready.go:83] waiting for pod "etcd-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.290142  217893 pod_ready.go:94] pod "etcd-pause-108809" is "Ready"
	I1212 00:29:06.290157  217893 pod_ready.go:86] duration metric: took 2.969103ms for pod "etcd-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.291718  217893 pod_ready.go:83] waiting for pod "kube-apiserver-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.294698  217893 pod_ready.go:94] pod "kube-apiserver-pause-108809" is "Ready"
	I1212 00:29:06.294714  217893 pod_ready.go:86] duration metric: took 2.98025ms for pod "kube-apiserver-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.296155  217893 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.683423  217893 pod_ready.go:94] pod "kube-controller-manager-pause-108809" is "Ready"
	I1212 00:29:06.683445  217893 pod_ready.go:86] duration metric: took 387.274617ms for pod "kube-controller-manager-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:06.884285  217893 pod_ready.go:83] waiting for pod "kube-proxy-8psfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:07.284117  217893 pod_ready.go:94] pod "kube-proxy-8psfp" is "Ready"
	I1212 00:29:07.284149  217893 pod_ready.go:86] duration metric: took 399.835854ms for pod "kube-proxy-8psfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:07.483682  217893 pod_ready.go:83] waiting for pod "kube-scheduler-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:07.884442  217893 pod_ready.go:94] pod "kube-scheduler-pause-108809" is "Ready"
	I1212 00:29:07.884468  217893 pod_ready.go:86] duration metric: took 400.76454ms for pod "kube-scheduler-pause-108809" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:29:07.884506  217893 pod_ready.go:40] duration metric: took 1.604612429s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:29:07.926040  217893 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:29:07.929006  217893 out.go:179] * Done! kubectl is now configured to use "pause-108809" cluster and "default" namespace by default
	I1212 00:29:03.970255  217460 cli_runner.go:164] Run: docker container inspect NoKubernetes-131237 --format={{.State.Status}}
	I1212 00:29:03.989538  217460 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:29:03.989560  217460 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-131237 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:29:04.032837  217460 cli_runner.go:164] Run: docker container inspect NoKubernetes-131237 --format={{.State.Status}}
	I1212 00:29:04.054742  217460 machine.go:94] provisionDockerMachine start ...
	I1212 00:29:04.054848  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:04.075138  217460 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:04.075432  217460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1212 00:29:04.075448  217460 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:29:04.076132  217460 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58226->127.0.0.1:33008: read: connection reset by peer
	I1212 00:29:07.208030  217460 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-131237
	
	I1212 00:29:07.208057  217460 ubuntu.go:182] provisioning hostname "NoKubernetes-131237"
	I1212 00:29:07.208127  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:07.225933  217460 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:07.226138  217460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1212 00:29:07.226150  217460 main.go:143] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-131237 && echo "NoKubernetes-131237" | sudo tee /etc/hostname
	I1212 00:29:07.365351  217460 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-131237
	
	I1212 00:29:07.365436  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:07.384422  217460 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:07.384657  217460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1212 00:29:07.384676  217460 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-131237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-131237/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-131237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:29:07.514616  217460 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:29:07.514643  217460 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:29:07.514688  217460 ubuntu.go:190] setting up certificates
	I1212 00:29:07.514699  217460 provision.go:84] configureAuth start
	I1212 00:29:07.514748  217460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-131237
	I1212 00:29:07.532462  217460 provision.go:143] copyHostCerts
	I1212 00:29:07.532506  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:29:07.532536  217460 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:29:07.532542  217460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:29:07.532606  217460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:29:07.532688  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:29:07.532715  217460 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:29:07.532725  217460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:29:07.532763  217460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:29:07.532852  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:29:07.532872  217460 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:29:07.532882  217460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:29:07.532922  217460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:29:07.533012  217460 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-131237 san=[127.0.0.1 192.168.85.2 NoKubernetes-131237 localhost minikube]
	I1212 00:29:07.705906  217460 provision.go:177] copyRemoteCerts
	I1212 00:29:07.705964  217460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:29:07.705996  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:07.723834  217460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa Username:docker}
	I1212 00:29:07.817989  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:29:07.818057  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:29:07.836508  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:29:07.836570  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 00:29:07.852872  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:29:07.852930  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:29:07.868808  217460 provision.go:87] duration metric: took 354.084635ms to configureAuth
	I1212 00:29:07.868832  217460 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:29:07.868974  217460 config.go:182] Loaded profile config "NoKubernetes-131237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:29:07.869074  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:07.887094  217460 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:07.887376  217460 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1212 00:29:07.887400  217460 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:29:08.166499  217460 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:29:08.166523  217460 machine.go:97] duration metric: took 4.11176039s to provisionDockerMachine
	I1212 00:29:08.166532  217460 client.go:176] duration metric: took 9.018135071s to LocalClient.Create
	I1212 00:29:08.166566  217460 start.go:167] duration metric: took 9.018197315s to libmachine.API.Create "NoKubernetes-131237"
	I1212 00:29:08.166576  217460 start.go:293] postStartSetup for "NoKubernetes-131237" (driver="docker")
	I1212 00:29:08.166585  217460 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:29:08.166640  217460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:29:08.166680  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:08.184020  217460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa Username:docker}
	I1212 00:29:08.279387  217460 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:29:08.282796  217460 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:29:08.282832  217460 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:29:08.282844  217460 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:29:08.282896  217460 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:29:08.283001  217460 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:29:08.283014  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> /etc/ssl/certs/145032.pem
	I1212 00:29:08.283126  217460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:29:08.289988  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:29:08.308710  217460 start.go:296] duration metric: took 142.124054ms for postStartSetup
	I1212 00:29:08.309066  217460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-131237
	I1212 00:29:08.327283  217460 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/config.json ...
	I1212 00:29:08.327539  217460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:29:08.327581  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:08.344943  217460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa Username:docker}
	I1212 00:29:08.438082  217460 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:29:08.442270  217460 start.go:128] duration metric: took 9.295654265s to createHost
	I1212 00:29:08.442289  217460 start.go:83] releasing machines lock for "NoKubernetes-131237", held for 9.295752357s
	I1212 00:29:08.442353  217460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-131237
	I1212 00:29:08.460120  217460 ssh_runner.go:195] Run: cat /version.json
	I1212 00:29:08.460174  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:08.460227  217460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:29:08.460293  217460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-131237
	I1212 00:29:08.478806  217460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa Username:docker}
	I1212 00:29:08.479242  217460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/NoKubernetes-131237/id_rsa Username:docker}
	I1212 00:29:08.570176  217460 ssh_runner.go:195] Run: systemctl --version
	I1212 00:29:08.623548  217460 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:29:08.658361  217460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:29:08.662793  217460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:29:08.662844  217460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:29:08.688794  217460 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:29:08.688812  217460 start.go:496] detecting cgroup driver to use...
	I1212 00:29:08.688845  217460 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:29:08.688885  217460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:29:08.704803  217460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:29:08.716450  217460 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:29:08.716523  217460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:29:08.735637  217460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:29:08.751204  217460 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:29:08.832288  217460 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:29:08.923428  217460 docker.go:234] disabling docker service ...
	I1212 00:29:08.923514  217460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:29:08.942295  217460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:29:08.953946  217460 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:29:04.952561  210380 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:29:04.952608  210380 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:29:09.032921  217460 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:29:09.111048  217460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:29:09.123147  217460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:29:09.136347  217460 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:29:09.136399  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.145875  217460 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:29:09.145926  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.154026  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.162056  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.170068  217460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:29:09.177387  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.185287  217460 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.197899  217460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:09.206481  217460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:29:09.213428  217460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:29:09.220606  217460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:09.304423  217460 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:29:09.448327  217460 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:29:09.448386  217460 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:29:09.452235  217460 start.go:564] Will wait 60s for crictl version
	I1212 00:29:09.452281  217460 ssh_runner.go:195] Run: which crictl
	I1212 00:29:09.455648  217460 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:29:09.478093  217460 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:29:09.478163  217460 ssh_runner.go:195] Run: crio --version
	I1212 00:29:09.505347  217460 ssh_runner.go:195] Run: crio --version
	I1212 00:29:09.532377  217460 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:29:09.533374  217460 cli_runner.go:164] Run: docker network inspect NoKubernetes-131237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:29:09.550631  217460 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 00:29:09.554563  217460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:29:09.564705  217460 kubeadm.go:884] updating cluster {Name:NoKubernetes-131237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:NoKubernetes-131237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:29:09.564818  217460 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:29:09.564859  217460 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:09.595343  217460 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:09.595362  217460 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:29:09.595404  217460 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:09.620095  217460 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:09.620115  217460 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:29:09.620125  217460 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1212 00:29:09.620221  217460 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=NoKubernetes-131237 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:NoKubernetes-131237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:29:09.620296  217460 ssh_runner.go:195] Run: crio config
	I1212 00:29:09.663515  217460 cni.go:84] Creating CNI manager for ""
	I1212 00:29:09.663539  217460 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:09.663554  217460 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:29:09.663575  217460 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-131237 NodeName:NoKubernetes-131237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:29:09.663730  217460 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "NoKubernetes-131237"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:29:09.663787  217460 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:29:09.671578  217460 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:29:09.671627  217460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:29:09.679131  217460 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1212 00:29:09.691262  217460 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:29:09.705567  217460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1212 00:29:09.717468  217460 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:29:09.720736  217460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:29:09.730066  217460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:09.808155  217460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:09.834726  217460 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237 for IP: 192.168.85.2
	I1212 00:29:09.834746  217460 certs.go:195] generating shared ca certs ...
	I1212 00:29:09.834767  217460 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:09.834914  217460 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:29:09.834961  217460 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:29:09.834972  217460 certs.go:257] generating profile certs ...
	I1212 00:29:09.835020  217460 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/client.key
	I1212 00:29:09.835036  217460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/client.crt with IP's: []
	I1212 00:29:09.937663  217460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/client.crt ...
	I1212 00:29:09.937687  217460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/client.crt: {Name:mkc503a8e188c2daef7e6ea4ce52c00d6f7d54a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:09.937839  217460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/client.key ...
	I1212 00:29:09.937850  217460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/client.key: {Name:mk8a7c5c8996c18a541105e5c4473e25a7caf7f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:09.937924  217460 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.key.00e57bd4
	I1212 00:29:09.937938  217460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.crt.00e57bd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 00:29:10.040810  217460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.crt.00e57bd4 ...
	I1212 00:29:10.040835  217460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.crt.00e57bd4: {Name:mk4f4b537db9f4e4c3bb03f81617189c61dbad95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:10.040970  217460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.key.00e57bd4 ...
	I1212 00:29:10.040984  217460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.key.00e57bd4: {Name:mkd325b9a346a12dc06d373199e11815eb176971 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:10.041052  217460 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.crt.00e57bd4 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.crt
	I1212 00:29:10.041146  217460 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.key.00e57bd4 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.key
	I1212 00:29:10.041209  217460 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/proxy-client.key
	I1212 00:29:10.041225  217460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/proxy-client.crt with IP's: []
	I1212 00:29:10.103687  217460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/proxy-client.crt ...
	I1212 00:29:10.103718  217460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/proxy-client.crt: {Name:mk39641361d07405150c6f1f3a9314b81c9a7bf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:10.103893  217460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/proxy-client.key ...
	I1212 00:29:10.103911  217460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/proxy-client.key: {Name:mk3b1549d50432c4739c354112178a07c65a71ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:10.104004  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:29:10.104027  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:29:10.104045  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:29:10.104063  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:29:10.104080  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:29:10.104098  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:29:10.104116  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:29:10.104133  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:29:10.104198  217460 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:29:10.104251  217460 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:29:10.104264  217460 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:29:10.104302  217460 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:29:10.104335  217460 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:29:10.104364  217460 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:29:10.104432  217460 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:29:10.104487  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> /usr/share/ca-certificates/145032.pem
	I1212 00:29:10.104511  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:10.104528  217460 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem -> /usr/share/ca-certificates/14503.pem
	I1212 00:29:10.105213  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:29:10.128681  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:29:10.147357  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:29:10.165095  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:29:10.182703  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 00:29:10.199294  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:29:10.216629  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:29:10.232778  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:29:10.248805  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:29:10.266109  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:29:10.282513  217460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:29:10.299285  217460 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:29:10.311786  217460 ssh_runner.go:195] Run: openssl version
	I1212 00:29:10.320082  217460 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:29:10.328758  217460 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:29:10.336667  217460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:29:10.340449  217460 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:29:10.340525  217460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:29:10.378196  217460 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:29:10.385923  217460 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:29:10.393446  217460 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:10.400449  217460 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:29:10.407254  217460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:10.410706  217460 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:10.410753  217460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:10.449040  217460 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:29:10.457066  217460 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:29:10.464981  217460 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:29:10.472663  217460 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:29:10.479972  217460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:29:10.483739  217460 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:29:10.483791  217460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:29:10.531422  217460 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:29:10.539766  217460 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
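The openssl/ln sequence above is minikube installing its CA material into the guest's trust store: each PEM placed under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and then exposed as an /etc/ssl/certs/<hash>.0 symlink. A minimal Go sketch of that pattern (a hypothetical helper, not minikube's actual code, run locally rather than over SSH, and needing root to write under /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the sequence in the log: hash the certificate with
// openssl, then create the /etc/ssl/certs/<hash>.0 symlink pointing at it.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate `ln -fs` by clearing any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}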
	I1212 00:29:10.548333  217460 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:10.552543  217460 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:29:10.552606  217460 kubeadm.go:401] StartCluster: {Name:NoKubernetes-131237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:NoKubernetes-131237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:10.552689  217460 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:29:10.552749  217460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:29:10.583344  217460 cri.go:89] found id: ""
	I1212 00:29:10.583398  217460 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:29:10.591752  217460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:29:10.600732  217460 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:29:10.600789  217460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:29:10.608784  217460 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:29:10.608804  217460 kubeadm.go:158] found existing configuration files:
	
	I1212 00:29:10.608845  217460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:29:10.616326  217460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:29:10.616372  217460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:29:10.623331  217460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:29:10.631091  217460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:29:10.631160  217460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:29:10.638181  217460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:29:10.645277  217460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:29:10.645333  217460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:29:10.652159  217460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:29:10.659178  217460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:29:10.659242  217460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
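The four grep/rm pairs above are minikube's stale-config cleanup before `kubeadm init`: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm can regenerate it. A hedged Go sketch of the same check-then-remove loop (illustrative names, run locally rather than through ssh_runner):

package main

import (
	"bytes"
	"fmt"
	"os"
)

const controlPlaneURL = "https://control-plane.minikube.internal:8443"

// cleanupStaleConfigs removes kubeconfig-style files that do not reference
// the expected control-plane endpoint, mirroring the grep/rm pairs in the log.
func cleanupStaleConfigs(paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !bytes.Contains(data, []byte(controlPlaneURL)) {
			// Missing file or wrong endpoint: delete it and let kubeadm rewrite it.
			_ = os.Remove(p)
			fmt.Printf("removed stale config: %s\n", p)
		}
	}
}

func main() {
	cleanupStaleConfigs([]string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}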
	I1212 00:29:10.666008  217460 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:29:10.707529  217460 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:29:10.707585  217460 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:29:10.727490  217460 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:29:10.727563  217460 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:29:10.727616  217460 kubeadm.go:319] OS: Linux
	I1212 00:29:10.727675  217460 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:29:10.727783  217460 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:29:10.727880  217460 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:29:10.727948  217460 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:29:10.728032  217460 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:29:10.728110  217460 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:29:10.728188  217460 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:29:10.728260  217460 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:29:10.786960  217460 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:29:10.787136  217460 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:29:10.787298  217460 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:29:10.795142  217460 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:29:07.132649  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:07.132960  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:07.632606  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:07.633018  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:08.132606  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:08.132924  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:08.632212  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:08.632615  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:09.132298  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:09.132685  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:09.632361  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:09.632722  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:10.132418  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:10.132777  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:10.632482  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:10.632786  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:11.132291  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:11.132727  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 00:29:11.632270  209256 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:29:11.632712  209256 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
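The repeated healthz probes from process 209256 are minikube polling the restarting apiserver roughly every 500ms; "connection refused" only means nothing is listening on 8443 yet. A minimal sketch of such a polling loop in Go, assuming a self-signed serving certificate (hence InsecureSkipVerify) and an arbitrary 4-minute deadline:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline expires. TLS verification is skipped because the apiserver
// serves a cluster-internal certificate.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}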
	
	
	==> CRI-O <==
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.955625251Z" level=info msg="RDT not available in the host system"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.955635087Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.95633417Z" level=info msg="Conmon does support the --sync option"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.956348314Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.956360216Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.957007881Z" level=info msg="Conmon does support the --sync option"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.957021031Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.960321289Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.960341873Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.960882912Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.961268519Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 12 00:29:04 pause-108809 crio[2165]: time="2025-12-12T00:29:04.961324399Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.040240515Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-b5lpn Namespace:kube-system ID:17f145c15981f8081147c43a895ff73e90d1598f3aaa25231eb4b32749b591a1 UID:c278674e-b431-4e23-9b7f-e64bf1141aa8 NetNS:/var/run/netns/eee1c12f-be94-4b38-9208-17595f9ef972 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002904b0}] Aliases:map[]}"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.040395585Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-b5lpn for CNI network kindnet (type=ptp)"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.040790398Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.040819271Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.040880523Z" level=info msg="Create NRI interface"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041018951Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041033541Z" level=info msg="runtime interface created"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041042634Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041048351Z" level=info msg="runtime interface starting up..."
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041052751Z" level=info msg="starting plugins..."
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041074328Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:29:05 pause-108809 crio[2165]: time="2025-12-12T00:29:05.041341116Z" level=info msg="No systemd watchdog enabled"
	Dec 12 00:29:05 pause-108809 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	6f11db6d21159       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago       Running             coredns                   0                   17f145c15981f       coredns-66bc5c9577-b5lpn               kube-system
	6ae6620086224       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   55 seconds ago       Running             kube-proxy                0                   3c2b18addbe2a       kube-proxy-8psfp                       kube-system
	90d794ebd94c1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   55 seconds ago       Running             kindnet-cni               0                   8d0c6d868ddba       kindnet-bvmdc                          kube-system
	41de3ec4d5aa2       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   0201d0420119a       kube-scheduler-pause-108809            kube-system
	1290ae60df498       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   89af789de5631       kube-apiserver-pause-108809            kube-system
	8fb46b8b8615b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   1117a810b70b3       kube-controller-manager-pause-108809   kube-system
	2b0ab04eb26df       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   6dbb936e68013       etcd-pause-108809                      kube-system
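The container status table above is CRI-O's view of the node, and the same data can be pulled with `crictl ps`, which is exactly what the `crictl ps -a --quiet --label ...` calls earlier in this log do. A small Go sketch that shells out to crictl with the kube-system label filter (assumes crictl on PATH and sudo rights, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the container IDs that crictl reports for
// the kube-system namespace, mirroring the `crictl ps -a --quiet --label ...`
// invocations seen earlier in the log.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}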
	
	
	==> coredns [6f11db6d211593329a49d20c3ba7717c980176c1fc4a2687c0baf90e40821715] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40976 - 53731 "HINFO IN 7448751840446251034.6306811712129152444. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.126159454s
	
	
	==> describe nodes <==
	Name:               pause-108809
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-108809
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=pause-108809
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_28_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:28:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-108809
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:29:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:29:03 +0000   Fri, 12 Dec 2025 00:28:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:29:03 +0000   Fri, 12 Dec 2025 00:28:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:29:03 +0000   Fri, 12 Dec 2025 00:28:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:29:03 +0000   Fri, 12 Dec 2025 00:28:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-108809
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                399173fd-1be6-4e2a-990f-f867e8b96a97
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://Unknown
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-b5lpn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     56s
	  kube-system                 etcd-pause-108809                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         61s
	  kube-system                 kindnet-bvmdc                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-pause-108809             250m (3%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-pause-108809    200m (2%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-8psfp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-pause-108809             100m (1%)     0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 66s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  66s (x8 over 66s)  kubelet          Node pause-108809 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s (x8 over 66s)  kubelet          Node pause-108809 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s (x8 over 66s)  kubelet          Node pause-108809 status is now: NodeHasSufficientPID
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s                kubelet          Node pause-108809 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s                kubelet          Node pause-108809 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s                kubelet          Node pause-108809 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node pause-108809 event: Registered Node pause-108809 in Controller
	  Normal  NodeReady                15s                kubelet          Node pause-108809 status is now: NodeReady
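As a quick sanity check derived only from the pod table above, the "Allocated resources" figures are the column sums: CPU requests 100m + 100m + 100m + 250m + 200m + 0 + 100m = 850m, CPU limits 100m (kindnet only), memory requests 70Mi + 100Mi + 50Mi = 220Mi, and memory limits 170Mi + 50Mi = 220Mi, which match the 850m / 100m / 220Mi / 220Mi totals reported for the node.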
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [2b0ab04eb26df9b71b29754cf1bd4e0318459af4d689ec2ec659fa10d7e04532] <==
	{"level":"warn","ts":"2025-12-12T00:28:07.858694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.871265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.880429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.889337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.897804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.907906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.915147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.924279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.933603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.942749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.951923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.960950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.970278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.978103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.987366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:07.994791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.002509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.011048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.019756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.033375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.046739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.050428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.060566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.069566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:08.141706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55882","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:29:12 up  1:11,  0 user,  load average: 3.25, 1.99, 1.30
	Linux pause-108809 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [90d794ebd94c1511854fdbd1662d5c751a16b53886cfdfc51cace8fc7f60fb0d] <==
	I1212 00:28:17.084683       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:28:17.084944       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 00:28:17.085079       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:28:17.085101       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:28:17.085126       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:28:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:28:17.288689       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:28:17.288727       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:28:17.288739       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:28:17.288916       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1212 00:28:47.289371       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1212 00:28:47.289381       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1212 00:28:47.289380       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1212 00:28:47.303766       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1212 00:28:48.788916       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:28:48.788941       1 metrics.go:72] Registering metrics
	I1212 00:28:48.788997       1 controller.go:711] "Syncing nftables rules"
	I1212 00:28:57.289808       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:28:57.289858       1 main.go:301] handling current node
	I1212 00:29:07.290992       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:29:07.291020       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1290ae60df498ba14c46da67189d27fa731430d7a9bb9f046f812c45953c1041] <==
	I1212 00:28:08.759551       1 policy_source.go:240] refreshing policies
	E1212 00:28:08.765528       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1212 00:28:08.814570       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:28:08.815934       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:28:08.816037       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1212 00:28:08.829106       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:28:08.830066       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 00:28:08.941405       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:28:09.617246       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 00:28:09.621346       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 00:28:09.621422       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:28:10.029890       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:28:10.070553       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:28:10.120307       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 00:28:10.131716       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1212 00:28:10.133140       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:28:10.139320       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:28:10.278515       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:28:11.180098       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:28:11.206279       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 00:28:11.235456       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 00:28:15.978654       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:28:16.081264       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:28:16.092986       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:28:16.383034       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8fb46b8b8615b03b0dfec91141c1f50e62e7cb228b01e02fa770caad208e9ab6] <==
	I1212 00:28:15.272718       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 00:28:15.272809       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-108809"
	I1212 00:28:15.272863       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1212 00:28:15.273896       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 00:28:15.273923       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 00:28:15.273970       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 00:28:15.274221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 00:28:15.274316       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 00:28:15.275083       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 00:28:15.275115       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 00:28:15.276276       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1212 00:28:15.276294       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 00:28:15.276884       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 00:28:15.279101       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:28:15.279109       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1212 00:28:15.279152       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 00:28:15.280352       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 00:28:15.280465       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 00:28:15.280711       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 00:28:15.281452       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 00:28:15.287648       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:28:15.287667       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 00:28:15.287675       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 00:28:15.298821       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:29:00.279726       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6ae66200862240b5d6ceae4c86c30cafe17708f8a56361f62fa78dab5081c690] <==
	I1212 00:28:16.894241       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:28:16.975533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 00:28:17.075881       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 00:28:17.075926       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 00:28:17.076039       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:28:17.107025       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:28:17.107236       1 server_linux.go:132] "Using iptables Proxier"
	I1212 00:28:17.115329       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:28:17.115790       1 server.go:527] "Version info" version="v1.34.2"
	I1212 00:28:17.115905       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:28:17.118247       1 config.go:200] "Starting service config controller"
	I1212 00:28:17.118252       1 config.go:309] "Starting node config controller"
	I1212 00:28:17.118275       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:28:17.118279       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:28:17.118286       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:28:17.118298       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:28:17.118300       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:28:17.118304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:28:17.118305       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:28:17.218952       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:28:17.218981       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:28:17.219001       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [41de3ec4d5aa25f201a8754434478b30d08711863b18624c5bb2fdd05571d3af] <==
	E1212 00:28:08.697646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 00:28:08.697906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 00:28:08.697983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 00:28:08.698108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 00:28:08.698208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 00:28:08.698268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 00:28:08.698300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 00:28:08.698390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 00:28:08.698517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 00:28:08.698602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 00:28:08.698732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 00:28:08.698981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 00:28:08.699703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 00:28:08.700090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 00:28:08.700164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 00:28:08.700833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 00:28:09.507945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 00:28:09.535962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 00:28:09.563245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1212 00:28:09.690957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 00:28:09.700200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 00:28:09.727402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 00:28:09.788331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 00:28:09.851647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1212 00:28:12.592645       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:28:57 pause-108809 kubelet[1312]: I1212 00:28:57.563366    1312 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 12 00:28:57 pause-108809 kubelet[1312]: I1212 00:28:57.657217    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df62z\" (UniqueName: \"kubernetes.io/projected/c278674e-b431-4e23-9b7f-e64bf1141aa8-kube-api-access-df62z\") pod \"coredns-66bc5c9577-b5lpn\" (UID: \"c278674e-b431-4e23-9b7f-e64bf1141aa8\") " pod="kube-system/coredns-66bc5c9577-b5lpn"
	Dec 12 00:28:57 pause-108809 kubelet[1312]: I1212 00:28:57.657267    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c278674e-b431-4e23-9b7f-e64bf1141aa8-config-volume\") pod \"coredns-66bc5c9577-b5lpn\" (UID: \"c278674e-b431-4e23-9b7f-e64bf1141aa8\") " pod="kube-system/coredns-66bc5c9577-b5lpn"
	Dec 12 00:28:58 pause-108809 kubelet[1312]: I1212 00:28:58.454283    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b5lpn" podStartSLOduration=42.454261559 podStartE2EDuration="42.454261559s" podCreationTimestamp="2025-12-12 00:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:28:58.454060167 +0000 UTC m=+47.420702923" watchObservedRunningTime="2025-12-12 00:28:58.454261559 +0000 UTC m=+47.420904311"
	Dec 12 00:29:02 pause-108809 kubelet[1312]: W1212 00:29:02.447834    1312 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 12 00:29:02 pause-108809 kubelet[1312]: E1212 00:29:02.447927    1312 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 12 00:29:02 pause-108809 kubelet[1312]: E1212 00:29:02.447976    1312 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:02 pause-108809 kubelet[1312]: E1212 00:29:02.447992    1312 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:02 pause-108809 kubelet[1312]: W1212 00:29:02.548279    1312 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 12 00:29:02 pause-108809 kubelet[1312]: W1212 00:29:02.689090    1312 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 12 00:29:02 pause-108809 kubelet[1312]: W1212 00:29:02.910433    1312 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.028997    1312 log.go:32] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.316703    1312 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.316819    1312 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.316843    1312 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.316861    1312 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:03 pause-108809 kubelet[1312]: W1212 00:29:03.336941    1312 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.448737    1312 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.448790    1312 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:03 pause-108809 kubelet[1312]: E1212 00:29:03.448807    1312 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 12 00:29:08 pause-108809 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:29:08 pause-108809 kubelet[1312]: I1212 00:29:08.324231    1312 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 12 00:29:08 pause-108809 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:29:08 pause-108809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:29:08 pause-108809 systemd[1]: kubelet.service: Consumed 2.055s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-108809 -n pause-108809
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-108809 -n pause-108809: exit status 2 (322.453735ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-108809 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.44s)
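Editor's note: the kubelet log above ends with repeated failures to reach /var/run/crio/crio.sock ("no such file or directory") followed by systemd stopping kubelet.service, while the follow-up status check exits 2 yet still prints Running for the API server. A hedged manual follow-up, assuming the pause-108809 profile were still available, would inspect the runtime directly (hypothetical commands, not part of the test run):

    minikube -p pause-108809 ssh -- sudo systemctl status crio --no-pager
    minikube -p pause-108809 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a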

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (240.099371ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:33:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
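Editor's note: the MK_ADDON_ENABLE_PAUSED error above is minikube's "is the cluster paused" pre-check failing, not the addon itself: the quoted command sudo runc list -f json exits 1 because /run/runc does not exist. Reproducing just that check by hand would amount to the following hypothetical commands against the same profile (assuming it is still running):

    minikube -p old-k8s-version-743506 ssh -- sudo runc list -f json
    minikube -p old-k8s-version-743506 ssh -- ls /run/runc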
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-743506 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-743506 describe deploy/metrics-server -n kube-system: exit status 1 (57.689464ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-743506 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
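Editor's note: the assertion at start_stop_delete_test.go:219 checks that the metrics-server deployment picked up the --images/--registries overrides, i.e. an image containing fake.domain/registry.k8s.io/echoserver:1.4; because enabling the addon failed, the deployment was never created and the deployment info above is empty. A hypothetical spot check for a run where the addon does enable cleanly:

    kubectl --context old-k8s-version-743506 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'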
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-743506
helpers_test.go:244: (dbg) docker inspect old-k8s-version-743506:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671",
	        "Created": "2025-12-12T00:32:56.81457716Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:32:57.088966832Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/hosts",
	        "LogPath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671-json.log",
	        "Name": "/old-k8s-version-743506",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-743506:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-743506",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671",
	                "LowerDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-743506",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-743506/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-743506",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-743506",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-743506",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9418e4e059e83668f6eefd437a6fb98640c0391f0fd83d524b6c1c56c76d7285",
	            "SandboxKey": "/var/run/docker/netns/9418e4e059e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-743506": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cdcdaa73e4279f08edc884bf6d2244b7a4df03294612c4ea8561dd87e0d0ec16",
	                    "EndpointID": "bcc33963730152043a4bdf74b0f1e9b62279077681854a43355e1ff4518f5262",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "16:ff:ac:96:f8:ac",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-743506",
	                        "e6e7fe2ace92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
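Editor's note: most of the inspect dump above matters only for the port bindings and the 192.168.103.2 address on the old-k8s-version-743506 network. A shorter, hypothetical way to pull just the API server mapping from the same container:

    docker port old-k8s-version-743506 8443/tcp
    # should print 127.0.0.1:33061, matching the NetworkSettings section above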
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-743506 -n old-k8s-version-743506
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-743506 logs -n 25
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ stop    │ -p NoKubernetes-131237                                                                                                                                                                                                                        │ NoKubernetes-131237       │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ start   │ -p NoKubernetes-131237 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-131237       │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ ssh     │ -p NoKubernetes-131237 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-131237       │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │                     │
	│ delete  │ -p NoKubernetes-131237                                                                                                                                                                                                                        │ NoKubernetes-131237       │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ start   │ -p force-systemd-flag-610815 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-610815 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ ssh     │ force-systemd-flag-610815 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-610815 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ delete  │ -p force-systemd-flag-610815                                                                                                                                                                                                                  │ force-systemd-flag-610815 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ start   │ -p missing-upgrade-038405 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-038405    │ jenkins │ v1.35.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:31 UTC │
	│ start   │ -p missing-upgrade-038405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-038405    │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ delete  │ -p missing-upgrade-038405                                                                                                                                                                                                                     │ missing-upgrade-038405    │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ start   │ -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-605797 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:32 UTC │
	│ stop    │ -p kubernetes-upgrade-605797                                                                                                                                                                                                                  │ kubernetes-upgrade-605797 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-605797 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │                     │
	│ delete  │ -p stopped-upgrade-148693                                                                                                                                                                                                                     │ stopped-upgrade-148693    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p cert-options-319518 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p cert-expiration-673665 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-673665    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ delete  │ -p running-upgrade-299658                                                                                                                                                                                                                     │ running-upgrade-299658    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-743506    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-expiration-673665                                                                                                                                                                                                                     │ cert-expiration-673665    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-675290         │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ cert-options-319518 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ -p cert-options-319518 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-options-319518                                                                                                                                                                                                                        │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-858659        │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-743506    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:33:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:33:14.727394  278144 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:33:14.727527  278144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:33:14.727540  278144 out.go:374] Setting ErrFile to fd 2...
	I1212 00:33:14.727548  278144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:33:14.727835  278144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:33:14.728471  278144 out.go:368] Setting JSON to false
	I1212 00:33:14.730016  278144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4541,"bootTime":1765495054,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:33:14.730096  278144 start.go:143] virtualization: kvm guest
	I1212 00:33:14.732300  278144 out.go:179] * [embed-certs-858659] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:33:14.733495  278144 notify.go:221] Checking for updates...
	I1212 00:33:14.733499  278144 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:33:14.734863  278144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:33:14.736177  278144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:14.737704  278144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:33:14.738840  278144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:33:14.739929  278144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:33:14.741610  278144 config.go:182] Loaded profile config "kubernetes-upgrade-605797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:33:14.741755  278144 config.go:182] Loaded profile config "no-preload-675290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:33:14.741905  278144 config.go:182] Loaded profile config "old-k8s-version-743506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 00:33:14.742081  278144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:33:14.769409  278144 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:33:14.769547  278144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:33:14.824953  278144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-12 00:33:14.815394412 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:33:14.825066  278144 docker.go:319] overlay module found
	I1212 00:33:14.827615  278144 out.go:179] * Using the docker driver based on user configuration
	I1212 00:33:14.828650  278144 start.go:309] selected driver: docker
	I1212 00:33:14.828664  278144 start.go:927] validating driver "docker" against <nil>
	I1212 00:33:14.828675  278144 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:33:14.829458  278144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:33:14.887224  278144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-12 00:33:14.877162845 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:33:14.887403  278144 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:33:14.887687  278144 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:33:14.891675  278144 out.go:179] * Using Docker driver with root privileges
	I1212 00:33:14.892895  278144 cni.go:84] Creating CNI manager for ""
	I1212 00:33:14.893000  278144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:14.893016  278144 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:33:14.893119  278144 start.go:353] cluster config:
	{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:33:14.894469  278144 out.go:179] * Starting "embed-certs-858659" primary control-plane node in "embed-certs-858659" cluster
	I1212 00:33:14.895585  278144 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:33:14.896800  278144 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:33:14.897761  278144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:33:14.897795  278144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:33:14.897814  278144 cache.go:65] Caching tarball of preloaded images
	I1212 00:33:14.897820  278144 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:33:14.897914  278144 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:33:14.897930  278144 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:33:14.898070  278144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:33:14.898101  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json: {Name:mka3ad5a51f2e77701ec67a66227f8bb0b6994ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:14.921669  278144 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:33:14.921689  278144 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:33:14.921708  278144 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:33:14.921743  278144 start.go:360] acquireMachinesLock for embed-certs-858659: {Name:mk65733daa8eb01c9a3ad2d27b0888c2a1a8b319 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:33:14.921849  278144 start.go:364] duration metric: took 84.758µs to acquireMachinesLock for "embed-certs-858659"
	I1212 00:33:14.921880  278144 start.go:93] Provisioning new machine with config: &{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:14.921967  278144 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:33:11.914966  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:12.414641  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:12.914599  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:13.414343  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:13.914224  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:14.414638  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:14.914709  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:15.414947  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:15.915234  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:16.414519  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:14.795605  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:33:14.795652  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:12.673526  272590 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.421983552s)
	I1212 00:33:12.673554  272590 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-10975/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1212 00:33:12.673588  272590 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 00:33:12.673653  272590 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 00:33:14.248095  272590 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.574416538s)
	I1212 00:33:14.248127  272590 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-10975/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1212 00:33:14.248166  272590 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 00:33:14.248233  272590 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 00:33:14.813113  272590 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-10975/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 00:33:14.813155  272590 cache_images.go:125] Successfully loaded all cached images
	I1212 00:33:14.813162  272590 cache_images.go:94] duration metric: took 10.342276233s to LoadCachedImages
	I1212 00:33:14.813176  272590 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 00:33:14.813282  272590 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-675290 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-675290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:33:14.813361  272590 ssh_runner.go:195] Run: crio config
	I1212 00:33:14.863560  272590 cni.go:84] Creating CNI manager for ""
	I1212 00:33:14.863584  272590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:14.863605  272590 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:33:14.863636  272590 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-675290 NodeName:no-preload-675290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:33:14.863772  272590 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-675290"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
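	(Aside: the kubeadm.yaml rendered above can be sanity-checked by hand before the test drives kubeadm init. A minimal sketch, assuming the matching kubeadm binary is already under /var/lib/minikube/binaries/ as installed in the steps below, and that the running kubeadm release supports `config validate`:)
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	  # or exercise the whole flow without mutating the node:
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run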
	
	I1212 00:33:14.863848  272590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:33:14.873217  272590 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1212 00:33:14.873288  272590 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:33:14.881909  272590 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1212 00:33:14.881961  272590 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1212 00:33:14.882020  272590 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1212 00:33:14.881963  272590 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1212 00:33:14.886285  272590 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1212 00:33:14.886316  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1212 00:33:15.884197  272590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:33:15.898090  272590 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1212 00:33:15.901895  272590 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1212 00:33:15.901925  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1212 00:33:16.266026  272590 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1212 00:33:16.270813  272590 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1212 00:33:16.270848  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
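	(The kubelet/kubeadm downloads above carry a checksum= query pointing at the published .sha256, which the downloader verifies before the scp. A minimal sketch of the equivalent manual check, assuming network access to dl.k8s.io:)
	  curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm"
	  curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256"
	  echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check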
	I1212 00:33:16.443667  272590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:33:16.461915  272590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:33:16.477224  272590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:33:16.496004  272590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1212 00:33:16.509653  272590 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:33:16.513544  272590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:33:16.524548  272590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:16.615408  272590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:16.649109  272590 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290 for IP: 192.168.76.2
	I1212 00:33:16.649132  272590 certs.go:195] generating shared ca certs ...
	I1212 00:33:16.649153  272590 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:16.649332  272590 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:33:16.649377  272590 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:33:16.649387  272590 certs.go:257] generating profile certs ...
	I1212 00:33:16.649448  272590 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.key
	I1212 00:33:16.649462  272590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.crt with IP's: []
	I1212 00:33:16.748107  272590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.crt ...
	I1212 00:33:16.748134  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.crt: {Name:mk00022ca9e5428de7e5a583050d69c3c5c2bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:16.748331  272590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.key ...
	I1212 00:33:16.748345  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.key: {Name:mkd4f1314753e5364a38754983dd2956364020bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:16.748457  272590 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46
	I1212 00:33:16.748482  272590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 00:33:17.118681  272590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46 ...
	I1212 00:33:17.118712  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46: {Name:mk0a1028bc5d92431abb55b4e7c2d66cfbf9c8a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:17.118911  272590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46 ...
	I1212 00:33:17.118935  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46: {Name:mk90ce7b5c7ba44e4d4cdb05bfe31ea45a556159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:17.119082  272590 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt
	I1212 00:33:17.119258  272590 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key
	I1212 00:33:17.119356  272590 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key
	I1212 00:33:17.119383  272590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt with IP's: []
	I1212 00:33:17.206397  272590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt ...
	I1212 00:33:17.206424  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt: {Name:mk4dba249fcf82c557c21e700f31c0a67e228b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:17.206613  272590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key ...
	I1212 00:33:17.206637  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key: {Name:mkd43d2f214a353e18ef7df608cd9a29775c0278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:17.206850  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:33:17.206900  272590 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:33:17.206928  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:33:17.206975  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:33:17.207014  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:33:17.207055  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:33:17.207133  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:33:17.207775  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:33:17.225767  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:33:17.243136  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:33:17.260318  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:33:17.277081  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:33:17.294021  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:33:17.311412  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:33:17.329281  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:33:17.346588  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:33:14.923847  278144 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 00:33:14.924337  278144 start.go:159] libmachine.API.Create for "embed-certs-858659" (driver="docker")
	I1212 00:33:14.924390  278144 client.go:173] LocalClient.Create starting
	I1212 00:33:14.924524  278144 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem
	I1212 00:33:14.924577  278144 main.go:143] libmachine: Decoding PEM data...
	I1212 00:33:14.924604  278144 main.go:143] libmachine: Parsing certificate...
	I1212 00:33:14.924685  278144 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem
	I1212 00:33:14.924718  278144 main.go:143] libmachine: Decoding PEM data...
	I1212 00:33:14.924746  278144 main.go:143] libmachine: Parsing certificate...
	I1212 00:33:14.925302  278144 cli_runner.go:164] Run: docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:33:14.950174  278144 cli_runner.go:211] docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:33:14.950269  278144 network_create.go:284] running [docker network inspect embed-certs-858659] to gather additional debugging logs...
	I1212 00:33:14.950295  278144 cli_runner.go:164] Run: docker network inspect embed-certs-858659
	W1212 00:33:14.974761  278144 cli_runner.go:211] docker network inspect embed-certs-858659 returned with exit code 1
	I1212 00:33:14.974794  278144 network_create.go:287] error running [docker network inspect embed-certs-858659]: docker network inspect embed-certs-858659: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-858659 not found
	I1212 00:33:14.974816  278144 network_create.go:289] output of [docker network inspect embed-certs-858659]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-858659 not found
	
	** /stderr **
	I1212 00:33:14.975003  278144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:33:15.001326  278144 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ad72354fdcf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:1e:a7:00:22:62} reservation:<nil>}
	I1212 00:33:15.002281  278144 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-18bf2e8051c8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:8e:8f:a6:9d:0c} reservation:<nil>}
	I1212 00:33:15.003308  278144 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5251ddf0a35 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:5a:cf:fd:1d:2a} reservation:<nil>}
	I1212 00:33:15.004265  278144 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f766d8223619 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:f6:b4:47:2f:69:da} reservation:<nil>}
	I1212 00:33:15.005079  278144 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f077d203a2ba IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:8e:69:f0:f2:3a:5d} reservation:<nil>}
	I1212 00:33:15.006240  278144 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb2ab0}
	I1212 00:33:15.006276  278144 network_create.go:124] attempt to create docker network embed-certs-858659 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1212 00:33:15.006349  278144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-858659 embed-certs-858659
	I1212 00:33:15.070104  278144 network_create.go:108] docker network embed-certs-858659 192.168.94.0/24 created
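	(To confirm the subnet and gateway picked for the network created above, the same docker CLI templates used elsewhere in this log can be queried directly; a minimal sketch:)
	  docker network inspect embed-certs-858659 \
	    --format 'subnet: {{range .IPAM.Config}}{{.Subnet}}{{end}}  gateway: {{range .IPAM.Config}}{{.Gateway}}{{end}}'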
	I1212 00:33:15.070132  278144 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-858659" container
	I1212 00:33:15.070189  278144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:33:15.087641  278144 cli_runner.go:164] Run: docker volume create embed-certs-858659 --label name.minikube.sigs.k8s.io=embed-certs-858659 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:33:15.104140  278144 oci.go:103] Successfully created a docker volume embed-certs-858659
	I1212 00:33:15.104214  278144 cli_runner.go:164] Run: docker run --rm --name embed-certs-858659-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-858659 --entrypoint /usr/bin/test -v embed-certs-858659:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:33:16.284491  278144 cli_runner.go:217] Completed: docker run --rm --name embed-certs-858659-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-858659 --entrypoint /usr/bin/test -v embed-certs-858659:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.180222233s)
	I1212 00:33:16.284534  278144 oci.go:107] Successfully prepared a docker volume embed-certs-858659
	I1212 00:33:16.284589  278144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:33:16.284601  278144 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:33:16.284643  278144 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-858659:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:33:16.914882  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:17.414926  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:17.914528  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:18.414943  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:18.914927  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:19.414842  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:19.914278  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:20.414704  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:20.914717  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:21.414504  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:19.592618  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:49408->192.168.85.2:8443: read: connection reset by peer
	I1212 00:33:19.592823  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:19.593231  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:19.791619  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:19.792087  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:20.291785  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:20.292172  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:20.791618  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:20.791974  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:21.291768  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:21.292177  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:21.790791  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:21.791240  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:17.366333  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:33:17.384570  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:33:17.401944  272590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:33:17.414391  272590 ssh_runner.go:195] Run: openssl version
	I1212 00:33:17.421380  272590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.428974  272590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:33:17.436624  272590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.440514  272590 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.440571  272590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.476855  272590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:33:17.485655  272590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:33:17.494695  272590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.502643  272590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:33:17.510565  272590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.514372  272590 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.514428  272590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.550028  272590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:17.558154  272590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:17.566696  272590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.574819  272590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:33:17.584533  272590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.588378  272590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.588451  272590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.624118  272590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:33:17.631989  272590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
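	(The openssl/ln pairs above build the OpenSSL hashed-directory layout: each trusted PEM under /etc/ssl/certs is also reachable through a <subject-hash>.0 symlink, which is exactly what the `x509 -hash` output feeds. A minimal sketch of the same step done by hand for the minikube CA:)
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 in this run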
	I1212 00:33:17.639398  272590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:33:17.643488  272590 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:33:17.643544  272590 kubeadm.go:401] StartCluster: {Name:no-preload-675290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-675290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:33:17.643628  272590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:33:17.643690  272590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:33:17.673075  272590 cri.go:89] found id: ""
	I1212 00:33:17.673155  272590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:33:17.681463  272590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:33:17.689858  272590 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:33:17.689918  272590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:33:17.697594  272590 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:33:17.697617  272590 kubeadm.go:158] found existing configuration files:
	
	I1212 00:33:17.697658  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:33:17.705097  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:33:17.705142  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:33:17.712562  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:33:17.720683  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:33:17.720733  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:33:17.728201  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:33:17.735840  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:33:17.735895  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:33:17.743333  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:33:17.750913  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:33:17.750963  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:33:17.757854  272590 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:33:17.864943  272590 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:33:17.921915  272590 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:33:21.914513  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:22.415154  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:22.914233  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:23.013859  270803 kubeadm.go:1114] duration metric: took 11.693512767s to wait for elevateKubeSystemPrivileges
	I1212 00:33:23.013899  270803 kubeadm.go:403] duration metric: took 21.39235223s to StartCluster
	I1212 00:33:23.013922  270803 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:23.014005  270803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:23.015046  270803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:23.015303  270803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:33:23.015318  270803 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:23.015380  270803 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:33:23.015468  270803 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-743506"
	I1212 00:33:23.015507  270803 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-743506"
	I1212 00:33:23.015517  270803 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-743506"
	I1212 00:33:23.015542  270803 host.go:66] Checking if "old-k8s-version-743506" exists ...
	I1212 00:33:23.015545  270803 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-743506"
	I1212 00:33:23.015549  270803 config.go:182] Loaded profile config "old-k8s-version-743506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 00:33:23.015961  270803 cli_runner.go:164] Run: docker container inspect old-k8s-version-743506 --format={{.State.Status}}
	I1212 00:33:23.016070  270803 cli_runner.go:164] Run: docker container inspect old-k8s-version-743506 --format={{.State.Status}}
	I1212 00:33:23.016790  270803 out.go:179] * Verifying Kubernetes components...
	I1212 00:33:23.018074  270803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:23.042393  270803 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-743506"
	I1212 00:33:23.042558  270803 host.go:66] Checking if "old-k8s-version-743506" exists ...
	I1212 00:33:23.042986  270803 cli_runner.go:164] Run: docker container inspect old-k8s-version-743506 --format={{.State.Status}}
	I1212 00:33:23.043608  270803 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:33:23.044740  270803 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:23.044768  270803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:33:23.044818  270803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743506
	I1212 00:33:23.079331  270803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/old-k8s-version-743506/id_rsa Username:docker}
	I1212 00:33:23.080872  270803 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:23.080897  270803 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:33:23.080957  270803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743506
	I1212 00:33:23.105874  270803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/old-k8s-version-743506/id_rsa Username:docker}
	I1212 00:33:23.153068  270803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:33:23.193030  270803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:23.203058  270803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:23.226507  270803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:23.451216  270803 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1212 00:33:23.452428  270803 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-743506" to be "Ready" ...
	I1212 00:33:23.654056  270803 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:33:20.370047  278144 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-858659:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.085338961s)
	I1212 00:33:20.370083  278144 kic.go:203] duration metric: took 4.085477303s to extract preloaded images to volume ...
	W1212 00:33:20.370184  278144 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 00:33:20.370229  278144 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 00:33:20.370279  278144 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:33:20.431590  278144 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-858659 --name embed-certs-858659 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-858659 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-858659 --network embed-certs-858659 --ip 192.168.94.2 --volume embed-certs-858659:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 00:33:20.738217  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Running}}
	I1212 00:33:20.759182  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:20.778854  278144 cli_runner.go:164] Run: docker exec embed-certs-858659 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:33:20.828668  278144 oci.go:144] the created container "embed-certs-858659" has a running status.
	I1212 00:33:20.828706  278144 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa...
	I1212 00:33:20.965886  278144 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:33:20.996346  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:21.014956  278144 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:33:21.014979  278144 kic_runner.go:114] Args: [docker exec --privileged embed-certs-858659 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:33:21.082712  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:21.106586  278144 machine.go:94] provisionDockerMachine start ...
	I1212 00:33:21.106678  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.131359  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:21.131707  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:21.131730  278144 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:33:21.276119  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:33:21.276147  278144 ubuntu.go:182] provisioning hostname "embed-certs-858659"
	I1212 00:33:21.276210  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.297386  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:21.297706  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:21.297733  278144 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-858659 && echo "embed-certs-858659" | sudo tee /etc/hostname
	I1212 00:33:21.451952  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:33:21.452044  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.476325  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:21.476652  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:21.476682  278144 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-858659' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-858659/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-858659' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:33:21.621125  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:33:21.621157  278144 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:33:21.621206  278144 ubuntu.go:190] setting up certificates
	I1212 00:33:21.621218  278144 provision.go:84] configureAuth start
	I1212 00:33:21.621282  278144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:33:21.642072  278144 provision.go:143] copyHostCerts
	I1212 00:33:21.642136  278144 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:33:21.642150  278144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:33:21.642232  278144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:33:21.642360  278144 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:33:21.642374  278144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:33:21.642414  278144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:33:21.642534  278144 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:33:21.642548  278144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:33:21.642588  278144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:33:21.642676  278144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.embed-certs-858659 san=[127.0.0.1 192.168.94.2 embed-certs-858659 localhost minikube]
	I1212 00:33:21.788738  278144 provision.go:177] copyRemoteCerts
	I1212 00:33:21.788793  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:33:21.788830  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.806536  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:21.918035  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:33:21.945961  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:33:21.965649  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 00:33:21.983776  278144 provision.go:87] duration metric: took 362.534714ms to configureAuth
	I1212 00:33:21.983806  278144 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:33:21.984002  278144 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:33:21.984122  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.014438  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:22.014755  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:22.014780  278144 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:33:22.307190  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:33:22.307211  278144 machine.go:97] duration metric: took 1.200604199s to provisionDockerMachine
	I1212 00:33:22.307228  278144 client.go:176] duration metric: took 7.38282296s to LocalClient.Create
	I1212 00:33:22.307253  278144 start.go:167] duration metric: took 7.38291887s to libmachine.API.Create "embed-certs-858659"
	I1212 00:33:22.307266  278144 start.go:293] postStartSetup for "embed-certs-858659" (driver="docker")
	I1212 00:33:22.307280  278144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:33:22.307346  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:33:22.307394  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.325538  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.424001  278144 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:33:22.427488  278144 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:33:22.427521  278144 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:33:22.427537  278144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:33:22.427586  278144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:33:22.427662  278144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:33:22.427750  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:33:22.435119  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:33:22.455192  278144 start.go:296] duration metric: took 147.912888ms for postStartSetup
	I1212 00:33:22.455613  278144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:33:22.474728  278144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:33:22.474959  278144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:33:22.474994  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.492769  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.583869  278144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:33:22.588020  278144 start.go:128] duration metric: took 7.666039028s to createHost
	I1212 00:33:22.588046  278144 start.go:83] releasing machines lock for "embed-certs-858659", held for 7.666181656s
	I1212 00:33:22.588106  278144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:33:22.604658  278144 ssh_runner.go:195] Run: cat /version.json
	I1212 00:33:22.604702  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.604722  278144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:33:22.604782  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.624351  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.624645  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.766960  278144 ssh_runner.go:195] Run: systemctl --version
	I1212 00:33:22.773040  278144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:33:22.807594  278144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:33:22.813502  278144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:33:22.813578  278144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:33:22.848889  278144 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:33:22.848911  278144 start.go:496] detecting cgroup driver to use...
	I1212 00:33:22.848946  278144 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:33:22.849004  278144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:33:22.866214  278144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:33:22.883089  278144 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:33:22.883154  278144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:33:22.903153  278144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:33:22.928607  278144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:33:23.051147  278144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:33:23.207194  278144 docker.go:234] disabling docker service ...
	I1212 00:33:23.207354  278144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:33:23.232382  278144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:33:23.250204  278144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:33:23.374364  278144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:33:23.499465  278144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:33:23.517532  278144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:33:23.538077  278144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:33:23.538180  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.551014  278144 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:33:23.551078  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.562817  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.574022  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.585051  278144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:33:23.594447  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.605211  278144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.621829  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.632339  278144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:33:23.642118  278144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:33:23.651626  278144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:23.735724  278144 ssh_runner.go:195] Run: sudo systemctl restart crio
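The preceding steps drop a crictl.yaml pointing at the CRI-O socket and patch /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon_cgroup, unprivileged-port sysctl) before restarting the runtime. A minimal sketch for spot-checking the result on the node, assuming the same kicbase paths shown in the log (not part of the test run):

	# inspect the keys the sed commands above rewrote
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	# confirm crictl reaches the restarted runtime over the configured socket
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info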
	I1212 00:33:23.880406  278144 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:33:23.880467  278144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:33:23.884392  278144 start.go:564] Will wait 60s for crictl version
	I1212 00:33:23.884439  278144 ssh_runner.go:195] Run: which crictl
	I1212 00:33:23.887821  278144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:33:23.911824  278144 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:33:23.911892  278144 ssh_runner.go:195] Run: crio --version
	I1212 00:33:23.938726  278144 ssh_runner.go:195] Run: crio --version
	I1212 00:33:23.967016  278144 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:33:23.968117  278144 cli_runner.go:164] Run: docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:33:23.986276  278144 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 00:33:23.990186  278144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:33:24.000018  278144 kubeadm.go:884] updating cluster {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:33:24.000130  278144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:33:24.000180  278144 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:33:24.032804  278144 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:33:24.032832  278144 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:33:24.032890  278144 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:33:24.062431  278144 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:33:24.062456  278144 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:33:24.062465  278144 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1212 00:33:24.062645  278144 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-858659 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
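The kubelet unit drop-in rendered above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down in the log. A quick way to view the merged unit on the node (standard systemd commands, not specific to this run):

	systemctl cat kubelet
	systemctl show kubelet -p ExecStart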
	I1212 00:33:24.062725  278144 ssh_runner.go:195] Run: crio config
	I1212 00:33:24.109010  278144 cni.go:84] Creating CNI manager for ""
	I1212 00:33:24.109037  278144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:24.109054  278144 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:33:24.109075  278144 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-858659 NodeName:embed-certs-858659 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:33:24.109202  278144 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-858659"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
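The generated kubeadm.yaml above combines v1beta4 InitConfiguration/ClusterConfiguration documents with KubeletConfiguration and KubeProxyConfiguration. As a hedged sketch (not part of the test run), a config like this can be sanity-checked offline with recent kubeadm releases:

	# validate the multi-document config before 'kubeadm init' consumes it
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# compare the kubelet settings against upstream defaults
	kubeadm config print init-defaults --component-configs KubeletConfiguration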
	I1212 00:33:24.109260  278144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:33:24.117514  278144 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:33:24.117569  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:33:24.125056  278144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1212 00:33:24.137518  278144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:33:24.151599  278144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1212 00:33:24.163535  278144 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:33:24.166948  278144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:33:24.176282  278144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:24.267072  278144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:24.289673  278144 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659 for IP: 192.168.94.2
	I1212 00:33:24.289695  278144 certs.go:195] generating shared ca certs ...
	I1212 00:33:24.289716  278144 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.289894  278144 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:33:24.289977  278144 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:33:24.289996  278144 certs.go:257] generating profile certs ...
	I1212 00:33:24.290069  278144 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key
	I1212 00:33:24.290095  278144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.crt with IP's: []
	I1212 00:33:24.425621  278144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.crt ...
	I1212 00:33:24.425651  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.crt: {Name:mk45b9fc7c32e03cd8b8b253cee0beecc89168ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.425825  278144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key ...
	I1212 00:33:24.425841  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key: {Name:mk7969034f40478ebc3fcd8da2e89e524ba77096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.425960  278144 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc
	I1212 00:33:24.425981  278144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1212 00:33:24.627246  278144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc ...
	I1212 00:33:24.627271  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc: {Name:mkf363b5d4278a387e18f286b3c76b364b923111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.627425  278144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc ...
	I1212 00:33:24.627438  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc: {Name:mkb8910c32db51006465f917ed06964af4a9674d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.627524  278144 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt
	I1212 00:33:24.627596  278144 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key
	I1212 00:33:24.627649  278144 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key
	I1212 00:33:24.627670  278144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt with IP's: []
	I1212 00:33:24.683493  278144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt ...
	I1212 00:33:24.683515  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt: {Name:mk4c83643d0e89a51dc996cf2dabd1ed6bdbf2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.683642  278144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key ...
	I1212 00:33:24.683664  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key: {Name:mkf320b4e9041cf5c42937a4ade7d266ee3cce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.683853  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:33:24.683891  278144 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:33:24.683898  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:33:24.683920  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:33:24.683943  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:33:24.683966  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:33:24.684004  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:33:24.684593  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:33:24.702599  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:33:24.719710  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:33:25.192170  272590 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:33:25.192235  272590 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:33:25.192361  272590 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:33:25.192432  272590 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:33:25.192519  272590 kubeadm.go:319] OS: Linux
	I1212 00:33:25.192568  272590 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:33:25.192609  272590 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:33:25.192664  272590 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:33:25.192706  272590 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:33:25.192747  272590 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:33:25.192786  272590 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:33:25.192831  272590 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:33:25.192889  272590 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:33:25.192982  272590 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:33:25.193125  272590 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:33:25.193255  272590 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:33:25.193337  272590 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:33:25.194568  272590 out.go:252]   - Generating certificates and keys ...
	I1212 00:33:25.194653  272590 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:33:25.194746  272590 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:33:25.194846  272590 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:33:25.194937  272590 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:33:25.195025  272590 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:33:25.195094  272590 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:33:25.195179  272590 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:33:25.195303  272590 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-675290] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:33:25.195397  272590 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:33:25.195573  272590 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-675290] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:33:25.195669  272590 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:33:25.195763  272590 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:33:25.195833  272590 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:33:25.195930  272590 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:33:25.196016  272590 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:33:25.196099  272590 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:33:25.196177  272590 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:33:25.196289  272590 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:33:25.196385  272590 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:33:25.196532  272590 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:33:25.196627  272590 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:33:25.197638  272590 out.go:252]   - Booting up control plane ...
	I1212 00:33:25.197759  272590 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:33:25.197871  272590 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:33:25.197963  272590 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:33:25.198118  272590 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:33:25.198245  272590 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:33:25.198333  272590 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:33:25.198415  272590 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:33:25.198461  272590 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:33:25.198654  272590 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:33:25.198773  272590 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:33:25.198875  272590 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001760288s
	I1212 00:33:25.199015  272590 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:33:25.199122  272590 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1212 00:33:25.199238  272590 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:33:25.199343  272590 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:33:25.199415  272590 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.336113ms
	I1212 00:33:25.199497  272590 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.771564525s
	I1212 00:33:25.199552  272590 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501309707s
	I1212 00:33:25.199690  272590 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:33:25.199852  272590 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:33:25.199905  272590 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:33:25.200119  272590 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-675290 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:33:25.200203  272590 kubeadm.go:319] [bootstrap-token] Using token: 6k5hlm.3rv4y6xn4tgjibyr
	I1212 00:33:25.201870  272590 out.go:252]   - Configuring RBAC rules ...
	I1212 00:33:25.201974  272590 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:33:25.202073  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:33:25.202233  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:33:25.202380  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:33:25.202516  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:33:25.202618  272590 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:33:25.202732  272590 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:33:25.202779  272590 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:33:25.202821  272590 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:33:25.202826  272590 kubeadm.go:319] 
	I1212 00:33:25.202910  272590 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:33:25.202924  272590 kubeadm.go:319] 
	I1212 00:33:25.203016  272590 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:33:25.203024  272590 kubeadm.go:319] 
	I1212 00:33:25.203058  272590 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:33:25.203145  272590 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:33:25.203208  272590 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:33:25.203217  272590 kubeadm.go:319] 
	I1212 00:33:25.203293  272590 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:33:25.203301  272590 kubeadm.go:319] 
	I1212 00:33:25.203364  272590 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:33:25.203385  272590 kubeadm.go:319] 
	I1212 00:33:25.203462  272590 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:33:25.203594  272590 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:33:25.203668  272590 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:33:25.203679  272590 kubeadm.go:319] 
	I1212 00:33:25.203748  272590 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:33:25.203818  272590 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:33:25.203825  272590 kubeadm.go:319] 
	I1212 00:33:25.203893  272590 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6k5hlm.3rv4y6xn4tgjibyr \
	I1212 00:33:25.203984  272590 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:33:25.204004  272590 kubeadm.go:319] 	--control-plane 
	I1212 00:33:25.204007  272590 kubeadm.go:319] 
	I1212 00:33:25.204131  272590 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:33:25.204141  272590 kubeadm.go:319] 
	I1212 00:33:25.204227  272590 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6k5hlm.3rv4y6xn4tgjibyr \
	I1212 00:33:25.204320  272590 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1212 00:33:25.204332  272590 cni.go:84] Creating CNI manager for ""
	I1212 00:33:25.204338  272590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:25.205594  272590 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 00:33:23.655263  270803 addons.go:530] duration metric: took 639.880413ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 00:33:23.955245  270803 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-743506" context rescaled to 1 replicas
	W1212 00:33:25.457032  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:22.290852  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:22.291316  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:22.791613  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:22.791924  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:23.291830  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:23.292241  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:23.791654  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:23.793187  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:24.291592  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:24.291924  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:24.791612  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:24.792009  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:25.291649  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:25.292046  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:25.791751  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:25.792178  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:26.291614  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:26.291985  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:26.791624  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:26.792019  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:25.206570  272590 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:33:25.210928  272590 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1212 00:33:25.210950  272590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:33:25.224287  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:33:25.457167  272590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:33:25.457324  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:25.457418  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-675290 minikube.k8s.io/updated_at=2025_12_12T00_33_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=no-preload-675290 minikube.k8s.io/primary=true
	I1212 00:33:25.476212  272590 ops.go:34] apiserver oom_adj: -16
	I1212 00:33:25.564286  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:26.065329  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:26.564328  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:27.064508  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:24.736576  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:33:24.755063  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 00:33:24.774545  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:33:24.793295  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:33:24.812705  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:33:24.830500  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:33:24.850525  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:33:24.868692  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:33:24.886268  278144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:33:24.897891  278144 ssh_runner.go:195] Run: openssl version
	I1212 00:33:24.904042  278144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.911176  278144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:33:24.918098  278144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.921629  278144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.921676  278144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.959657  278144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:24.967294  278144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:24.975343  278144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:24.982273  278144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:33:24.990092  278144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:24.993779  278144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:24.993827  278144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:25.031998  278144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:33:25.039700  278144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:33:25.048043  278144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.056012  278144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:33:25.064509  278144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.069175  278144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.069228  278144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.108584  278144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:33:25.117805  278144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
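The openssl/ln sequence above installs each CA into /etc/ssl/certs under its OpenSSL subject-hash name (for example minikubeCA.pem -> b5213941.0), which is how standard CApath lookups find it. A small illustration of the same pattern, assuming the paths from the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${h}.0
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem  # should report: OK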
	I1212 00:33:25.124959  278144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:33:25.128496  278144 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:33:25.128549  278144 kubeadm.go:401] StartCluster: {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:33:25.128616  278144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:33:25.128656  278144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:33:25.152735  278144 cri.go:89] found id: ""
	I1212 00:33:25.152791  278144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:33:25.160002  278144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:33:25.168186  278144 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:33:25.168235  278144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:33:25.175527  278144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:33:25.175541  278144 kubeadm.go:158] found existing configuration files:
	
	I1212 00:33:25.175574  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:33:25.182726  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:33:25.182769  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:33:25.190663  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:33:25.198827  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:33:25.198870  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:33:25.206417  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:33:25.214331  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:33:25.214380  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:33:25.221328  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:33:25.228771  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:33:25.228812  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:33:25.236619  278144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:33:25.298265  278144 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:33:25.365354  278144 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:33:27.564979  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:28.064794  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:28.564602  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:29.064565  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:29.565218  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:30.064844  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:30.133149  272590 kubeadm.go:1114] duration metric: took 4.675879902s to wait for elevateKubeSystemPrivileges
	I1212 00:33:30.133195  272590 kubeadm.go:403] duration metric: took 12.489653684s to StartCluster
	I1212 00:33:30.133220  272590 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:30.133290  272590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:30.134407  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:30.145606  272590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:33:30.145618  272590 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:30.145650  272590 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:33:30.145741  272590 addons.go:70] Setting storage-provisioner=true in profile "no-preload-675290"
	I1212 00:33:30.145762  272590 addons.go:239] Setting addon storage-provisioner=true in "no-preload-675290"
	I1212 00:33:30.145799  272590 host.go:66] Checking if "no-preload-675290" exists ...
	I1212 00:33:30.145837  272590 config.go:182] Loaded profile config "no-preload-675290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:33:30.145762  272590 addons.go:70] Setting default-storageclass=true in profile "no-preload-675290"
	I1212 00:33:30.145893  272590 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-675290"
	I1212 00:33:30.146227  272590 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:33:30.146407  272590 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:33:30.208678  272590 addons.go:239] Setting addon default-storageclass=true in "no-preload-675290"
	I1212 00:33:30.208723  272590 host.go:66] Checking if "no-preload-675290" exists ...
	I1212 00:33:30.209146  272590 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:33:30.209555  272590 out.go:179] * Verifying Kubernetes components...
	I1212 00:33:30.228632  272590 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:30.229410  272590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:33:30.229504  272590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:33:30.230540  272590 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:33:30.231654  272590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:30.232880  272590 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:30.232898  272590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:33:30.232958  272590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:33:30.241902  272590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:33:30.255977  272590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:33:30.260883  272590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:33:30.396215  272590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:30.411003  272590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:30.418969  272590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:30.499202  272590 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1212 00:33:30.732276  272590 node_ready.go:35] waiting up to 6m0s for node "no-preload-675290" to be "Ready" ...
	I1212 00:33:30.733577  272590 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1212 00:33:27.955456  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	W1212 00:33:30.457818  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:27.291880  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:27.292353  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:27.790970  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:27.791329  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:28.290880  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:28.291231  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:28.790864  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:28.791237  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:29.290866  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:29.291244  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:29.790872  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:29.791234  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:30.291461  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:30.291886  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:30.791224  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:30.734544  272590 addons.go:530] duration metric: took 588.890424ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1212 00:33:31.004952  272590 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-675290" context rescaled to 1 replicas
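The kapi.go line above reports that the coredns deployment was rescaled to one replica. The same operation can be reproduced by hand with plain kubectl; a minimal sketch, assuming the kube context is named after the profile (no-preload-675290), as minikube configures by default:

  # scale coredns down to a single replica and confirm the result
  kubectl --context no-preload-675290 -n kube-system scale deployment coredns --replicas=1
  kubectl --context no-preload-675290 -n kube-system get deployment coredns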
	W1212 00:33:32.955094  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	W1212 00:33:34.955638  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:35.791724  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:33:35.791770  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:37.194116  278144 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:33:37.194203  278144 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:33:37.194314  278144 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:33:37.194389  278144 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:33:37.194458  278144 kubeadm.go:319] OS: Linux
	I1212 00:33:37.194544  278144 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:33:37.194613  278144 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:33:37.194687  278144 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:33:37.194756  278144 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:33:37.194825  278144 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:33:37.194905  278144 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:33:37.194979  278144 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:33:37.195045  278144 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:33:37.195123  278144 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:33:37.195248  278144 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:33:37.195386  278144 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:33:37.195465  278144 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:33:37.196692  278144 out.go:252]   - Generating certificates and keys ...
	I1212 00:33:37.196747  278144 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:33:37.196815  278144 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:33:37.196870  278144 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:33:37.196966  278144 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:33:37.197058  278144 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:33:37.197125  278144 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:33:37.197200  278144 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:33:37.197369  278144 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-858659 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:33:37.197413  278144 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:33:37.197552  278144 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-858659 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:33:37.197607  278144 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:33:37.197660  278144 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:33:37.197696  278144 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:33:37.197749  278144 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:33:37.197790  278144 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:33:37.197879  278144 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:33:37.197966  278144 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:33:37.198076  278144 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:33:37.198145  278144 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:33:37.198253  278144 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:33:37.198355  278144 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:33:37.199406  278144 out.go:252]   - Booting up control plane ...
	I1212 00:33:37.199517  278144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:33:37.199604  278144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:33:37.199697  278144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:33:37.199790  278144 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:33:37.199864  278144 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:33:37.199961  278144 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:33:37.200052  278144 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:33:37.200115  278144 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:33:37.200307  278144 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:33:37.200411  278144 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:33:37.200521  278144 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001833759s
	I1212 00:33:37.200639  278144 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:33:37.200752  278144 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1212 00:33:37.200863  278144 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:33:37.200962  278144 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:33:37.201076  278144 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.434651471s
	I1212 00:33:37.201168  278144 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.026988685s
	I1212 00:33:37.201264  278144 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00162745s
	I1212 00:33:37.201389  278144 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:33:37.201534  278144 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:33:37.201603  278144 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:33:37.201804  278144 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-858659 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:33:37.201887  278144 kubeadm.go:319] [bootstrap-token] Using token: ggt060.eefg72qnn6nqw2lf
	I1212 00:33:37.203122  278144 out.go:252]   - Configuring RBAC rules ...
	I1212 00:33:37.203215  278144 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:33:37.203290  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:33:37.203414  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:33:37.203586  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:33:37.203694  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:33:37.203763  278144 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:33:37.203865  278144 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:33:37.203907  278144 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:33:37.203946  278144 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:33:37.203952  278144 kubeadm.go:319] 
	I1212 00:33:37.204010  278144 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:33:37.204016  278144 kubeadm.go:319] 
	I1212 00:33:37.204085  278144 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:33:37.204091  278144 kubeadm.go:319] 
	I1212 00:33:37.204114  278144 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:33:37.204164  278144 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:33:37.204212  278144 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:33:37.204218  278144 kubeadm.go:319] 
	I1212 00:33:37.204259  278144 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:33:37.204264  278144 kubeadm.go:319] 
	I1212 00:33:37.204300  278144 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:33:37.204305  278144 kubeadm.go:319] 
	I1212 00:33:37.204349  278144 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:33:37.204441  278144 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:33:37.204532  278144 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:33:37.204545  278144 kubeadm.go:319] 
	I1212 00:33:37.204614  278144 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:33:37.204686  278144 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:33:37.204691  278144 kubeadm.go:319] 
	I1212 00:33:37.204771  278144 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ggt060.eefg72qnn6nqw2lf \
	I1212 00:33:37.204864  278144 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:33:37.204883  278144 kubeadm.go:319] 	--control-plane 
	I1212 00:33:37.204886  278144 kubeadm.go:319] 
	I1212 00:33:37.204951  278144 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:33:37.204957  278144 kubeadm.go:319] 
	I1212 00:33:37.205046  278144 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ggt060.eefg72qnn6nqw2lf \
	I1212 00:33:37.205184  278144 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
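The bootstrap token in the join commands above is short-lived (kubeadm's default TTL is 24h), so the literal command is only usable briefly. A fresh join command can be regenerated on the control-plane node; a minimal sketch, assuming a shell inside the embed-certs-858659 node (e.g. minikube ssh -p embed-certs-858659) where kubeadm lives under /var/lib/minikube/binaries/v1.34.2 as shown at the start of this init run:

  # list existing bootstrap tokens and their expiry
  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm token list
  # mint a new token and print a ready-to-run join command
  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm token create --print-join-command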
	I1212 00:33:37.205197  278144 cni.go:84] Creating CNI manager for ""
	I1212 00:33:37.205206  278144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:37.206375  278144 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1212 00:33:32.734941  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	W1212 00:33:34.735837  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	W1212 00:33:37.235035  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	I1212 00:33:37.207280  278144 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:33:37.211289  278144 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 00:33:37.211304  278144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:33:37.224403  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:33:37.418751  278144 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:33:37.418835  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:37.418886  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-858659 minikube.k8s.io/updated_at=2025_12_12T00_33_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=embed-certs-858659 minikube.k8s.io/primary=true
	I1212 00:33:37.430124  278144 ops.go:34] apiserver oom_adj: -16
	I1212 00:33:37.494810  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:37.995607  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:38.495663  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:38.995202  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:39.495603  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1212 00:33:36.956141  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:38.455307  270803 node_ready.go:49] node "old-k8s-version-743506" is "Ready"
	I1212 00:33:38.455334  270803 node_ready.go:38] duration metric: took 15.002846535s for node "old-k8s-version-743506" to be "Ready" ...
	I1212 00:33:38.455348  270803 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:33:38.455398  270803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:33:38.467187  270803 api_server.go:72] duration metric: took 15.451831949s to wait for apiserver process to appear ...
	I1212 00:33:38.467214  270803 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:33:38.467240  270803 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:33:38.471174  270803 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1212 00:33:38.474455  270803 api_server.go:141] control plane version: v1.28.0
	I1212 00:33:38.474507  270803 api_server.go:131] duration metric: took 7.285296ms to wait for apiserver health ...
	I1212 00:33:38.474518  270803 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:33:38.478117  270803 system_pods.go:59] 8 kube-system pods found
	I1212 00:33:38.478153  270803 system_pods.go:61] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:38.478162  270803 system_pods.go:61] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running
	I1212 00:33:38.478171  270803 system_pods.go:61] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:33:38.478177  270803 system_pods.go:61] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running
	I1212 00:33:38.478183  270803 system_pods.go:61] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running
	I1212 00:33:38.478189  270803 system_pods.go:61] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:33:38.478195  270803 system_pods.go:61] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running
	I1212 00:33:38.478204  270803 system_pods.go:61] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:33:38.478211  270803 system_pods.go:74] duration metric: took 3.685843ms to wait for pod list to return data ...
	I1212 00:33:38.478222  270803 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:33:38.480216  270803 default_sa.go:45] found service account: "default"
	I1212 00:33:38.480238  270803 default_sa.go:55] duration metric: took 2.008735ms for default service account to be created ...
	I1212 00:33:38.480247  270803 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:33:38.483031  270803 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:38.483061  270803 system_pods.go:89] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:38.483070  270803 system_pods.go:89] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running
	I1212 00:33:38.483077  270803 system_pods.go:89] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:33:38.483087  270803 system_pods.go:89] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running
	I1212 00:33:38.483097  270803 system_pods.go:89] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running
	I1212 00:33:38.483104  270803 system_pods.go:89] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:33:38.483109  270803 system_pods.go:89] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running
	I1212 00:33:38.483118  270803 system_pods.go:89] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:33:38.483126  270803 system_pods.go:126] duration metric: took 2.872475ms to wait for k8s-apps to be running ...
	I1212 00:33:38.483137  270803 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:33:38.483182  270803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:33:38.496061  270803 system_svc.go:56] duration metric: took 12.919771ms WaitForService to wait for kubelet
	I1212 00:33:38.496082  270803 kubeadm.go:587] duration metric: took 15.480732762s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:33:38.496102  270803 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:33:38.497977  270803 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:33:38.497995  270803 node_conditions.go:123] node cpu capacity is 8
	I1212 00:33:38.498010  270803 node_conditions.go:105] duration metric: took 1.903088ms to run NodePressure ...
	I1212 00:33:38.498022  270803 start.go:242] waiting for startup goroutines ...
	I1212 00:33:38.498030  270803 start.go:247] waiting for cluster config update ...
	I1212 00:33:38.498043  270803 start.go:256] writing updated cluster config ...
	I1212 00:33:38.498328  270803 ssh_runner.go:195] Run: rm -f paused
	I1212 00:33:38.501782  270803 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:38.505265  270803 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.510596  270803 pod_ready.go:94] pod "coredns-5dd5756b68-nxwdc" is "Ready"
	I1212 00:33:39.510619  270803 pod_ready.go:86] duration metric: took 1.005335277s for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.513364  270803 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.517516  270803 pod_ready.go:94] pod "etcd-old-k8s-version-743506" is "Ready"
	I1212 00:33:39.517534  270803 pod_ready.go:86] duration metric: took 4.146163ms for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.520326  270803 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.524429  270803 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-743506" is "Ready"
	I1212 00:33:39.524446  270803 pod_ready.go:86] duration metric: took 4.103471ms for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.527203  270803 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.709024  270803 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-743506" is "Ready"
	I1212 00:33:39.709046  270803 pod_ready.go:86] duration metric: took 181.825306ms for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.909520  270803 pod_ready.go:83] waiting for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.308323  270803 pod_ready.go:94] pod "kube-proxy-pz8kt" is "Ready"
	I1212 00:33:40.308348  270803 pod_ready.go:86] duration metric: took 398.805252ms for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.509941  270803 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.908891  270803 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-743506" is "Ready"
	I1212 00:33:40.908911  270803 pod_ready.go:86] duration metric: took 398.947384ms for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.908922  270803 pod_ready.go:40] duration metric: took 2.407114173s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:40.958106  270803 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1212 00:33:40.959982  270803 out.go:203] 
	W1212 00:33:40.961226  270803 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1212 00:33:40.962401  270803 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1212 00:33:40.963860  270803 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-743506" cluster and "default" namespace by default
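The warning just above flags a client/server minor skew of 6 (kubectl 1.34.3 against a 1.28.0 control plane), well beyond the one-minor-version skew kubectl supports. The log already names the workaround; spelled out with the profile name from this run:

  # drive the cluster through the kubectl that minikube downloads for its Kubernetes version
  minikube -p old-k8s-version-743506 kubectl -- get pods -A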
	I1212 00:33:40.795082  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:33:40.795136  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:33:40.795187  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:33:40.820906  263844 cri.go:89] found id: "e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:33:40.820925  263844 cri.go:89] found id: "e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254"
	I1212 00:33:40.820929  263844 cri.go:89] found id: ""
	I1212 00:33:40.820936  263844 logs.go:282] 2 containers: [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106 e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254]
	I1212 00:33:40.820987  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.824897  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.828680  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:33:40.828744  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:33:40.854459  263844 cri.go:89] found id: ""
	I1212 00:33:40.854509  263844 logs.go:282] 0 containers: []
	W1212 00:33:40.854518  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:33:40.854526  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:33:40.854579  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:33:40.881533  263844 cri.go:89] found id: ""
	I1212 00:33:40.881555  263844 logs.go:282] 0 containers: []
	W1212 00:33:40.881564  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:33:40.881572  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:33:40.881630  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:33:40.906345  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:33:40.906365  263844 cri.go:89] found id: ""
	I1212 00:33:40.906374  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:33:40.906435  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.910519  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:33:40.910577  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:33:40.939434  263844 cri.go:89] found id: ""
	I1212 00:33:40.939463  263844 logs.go:282] 0 containers: []
	W1212 00:33:40.939499  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:33:40.939509  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:33:40.939555  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:33:40.966792  263844 cri.go:89] found id: "b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:33:40.966812  263844 cri.go:89] found id: "962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e"
	I1212 00:33:40.966818  263844 cri.go:89] found id: ""
	I1212 00:33:40.966826  263844 logs.go:282] 2 containers: [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0 962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e]
	I1212 00:33:40.966878  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.970829  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.974717  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:33:40.974776  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:33:41.005958  263844 cri.go:89] found id: ""
	I1212 00:33:41.005985  263844 logs.go:282] 0 containers: []
	W1212 00:33:41.005996  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:33:41.006006  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:33:41.006056  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:33:41.039359  263844 cri.go:89] found id: ""
	I1212 00:33:41.039388  263844 logs.go:282] 0 containers: []
	W1212 00:33:41.039399  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:33:41.039418  263844 logs.go:123] Gathering logs for kube-controller-manager [962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e] ...
	I1212 00:33:41.039433  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e"
	I1212 00:33:41.071692  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:33:41.071722  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:33:41.114665  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:33:41.114701  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:33:41.158370  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:33:41.158395  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:33:41.172978  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:33:41.173006  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:33:41.203032  263844 logs.go:123] Gathering logs for kube-apiserver [e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254] ...
	I1212 00:33:41.203057  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254"
	I1212 00:33:41.234616  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:33:41.234651  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:33:41.261883  263844 logs.go:123] Gathering logs for kube-controller-manager [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0] ...
	I1212 00:33:41.261908  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:33:41.288586  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:33:41.288610  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:33:41.345462  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:33:41.345501  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
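At this point the 263844 run has seen only connection refused and client timeouts from its apiserver healthz endpoint and is falling back to collecting component logs. The same checks can be made by hand from inside that node; a minimal sketch using the address and container IDs taken from the log lines above:

  # probe the health endpoint directly (self-signed serving cert, hence -k)
  curl -k https://192.168.85.2:8443/healthz
  # list apiserver containers, running or exited, then pull recent logs from one of them
  sudo crictl ps -a --name=kube-apiserver
  sudo crictl logs --tail 100 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106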
	W1212 00:33:39.235689  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	W1212 00:33:41.735356  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	I1212 00:33:39.995687  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:40.495092  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:40.995851  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:41.495552  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:41.995217  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:42.495439  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:42.557808  278144 kubeadm.go:1114] duration metric: took 5.139031528s to wait for elevateKubeSystemPrivileges
	I1212 00:33:42.557850  278144 kubeadm.go:403] duration metric: took 17.429303229s to StartCluster
	I1212 00:33:42.557872  278144 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:42.557936  278144 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:42.559776  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:42.560013  278144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:33:42.560028  278144 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:33:42.560006  278144 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:42.560108  278144 addons.go:70] Setting default-storageclass=true in profile "embed-certs-858659"
	I1212 00:33:42.560154  278144 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-858659"
	I1212 00:33:42.560226  278144 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:33:42.560101  278144 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-858659"
	I1212 00:33:42.560305  278144 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-858659"
	I1212 00:33:42.560343  278144 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:33:42.560558  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:42.560804  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:42.562345  278144 out.go:179] * Verifying Kubernetes components...
	I1212 00:33:42.563485  278144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:42.582693  278144 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:33:42.583659  278144 addons.go:239] Setting addon default-storageclass=true in "embed-certs-858659"
	I1212 00:33:42.583715  278144 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:33:42.584226  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:42.586890  278144 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:42.586913  278144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:33:42.586987  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:42.611417  278144 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:42.611444  278144 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:33:42.611665  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:42.616944  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:42.634991  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:42.644809  278144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:33:42.696105  278144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:42.731013  278144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:42.747779  278144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:42.816002  278144 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1212 00:33:42.817999  278144 node_ready.go:35] waiting up to 6m0s for node "embed-certs-858659" to be "Ready" ...
	I1212 00:33:43.019945  278144 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:33:43.021070  278144 addons.go:530] duration metric: took 461.030823ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 00:33:43.319944  278144 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-858659" context rescaled to 1 replicas
	I1212 00:33:43.734653  272590 node_ready.go:49] node "no-preload-675290" is "Ready"
	I1212 00:33:43.734705  272590 node_ready.go:38] duration metric: took 13.002376355s for node "no-preload-675290" to be "Ready" ...
	I1212 00:33:43.734724  272590 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:33:43.734797  272590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:33:43.750081  272590 api_server.go:72] duration metric: took 13.604432741s to wait for apiserver process to appear ...
	I1212 00:33:43.750104  272590 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:33:43.750123  272590 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:33:43.755396  272590 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 00:33:43.756073  272590 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 00:33:43.756093  272590 api_server.go:131] duration metric: took 5.983405ms to wait for apiserver health ...
	I1212 00:33:43.756101  272590 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:33:43.759052  272590 system_pods.go:59] 8 kube-system pods found
	I1212 00:33:43.759083  272590 system_pods.go:61] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:43.759090  272590 system_pods.go:61] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:43.759097  272590 system_pods.go:61] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:43.759103  272590 system_pods.go:61] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:43.759118  272590 system_pods.go:61] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:43.759126  272590 system_pods.go:61] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:43.759132  272590 system_pods.go:61] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:43.759150  272590 system_pods.go:61] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:43.759160  272590 system_pods.go:74] duration metric: took 3.053065ms to wait for pod list to return data ...
	I1212 00:33:43.759171  272590 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:33:43.761049  272590 default_sa.go:45] found service account: "default"
	I1212 00:33:43.761074  272590 default_sa.go:55] duration metric: took 1.895794ms for default service account to be created ...
	I1212 00:33:43.761081  272590 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:33:43.763348  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:43.763384  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:43.763391  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:43.763399  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:43.763404  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:43.763419  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:43.763425  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:43.763430  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:43.763439  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:43.763507  272590 retry.go:31] will retry after 264.298758ms: missing components: kube-dns
	I1212 00:33:44.031211  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:44.031246  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:44.031253  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:44.031262  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:44.031268  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:44.031276  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:44.031281  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:44.031286  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:44.031294  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:44.031311  272590 retry.go:31] will retry after 311.660302ms: missing components: kube-dns
	I1212 00:33:44.346179  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:44.346210  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:44.346216  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:44.346220  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:44.346224  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:44.346229  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:44.346232  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:44.346235  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:44.346240  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:44.346254  272590 retry.go:31] will retry after 325.219552ms: missing components: kube-dns
	I1212 00:33:44.674796  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:44.674821  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Running
	I1212 00:33:44.674827  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:44.674831  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:44.674834  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:44.674839  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:44.674842  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:44.674847  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:44.674850  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Running
	I1212 00:33:44.674858  272590 system_pods.go:126] duration metric: took 913.771066ms to wait for k8s-apps to be running ...
	I1212 00:33:44.674868  272590 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:33:44.674910  272590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:33:44.687374  272590 system_svc.go:56] duration metric: took 12.493284ms WaitForService to wait for kubelet
	I1212 00:33:44.687396  272590 kubeadm.go:587] duration metric: took 14.541752044s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:33:44.687415  272590 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:33:44.689839  272590 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:33:44.689859  272590 node_conditions.go:123] node cpu capacity is 8
	I1212 00:33:44.689872  272590 node_conditions.go:105] duration metric: took 2.452154ms to run NodePressure ...
	I1212 00:33:44.689883  272590 start.go:242] waiting for startup goroutines ...
	I1212 00:33:44.689889  272590 start.go:247] waiting for cluster config update ...
	I1212 00:33:44.689899  272590 start.go:256] writing updated cluster config ...
	I1212 00:33:44.690128  272590 ssh_runner.go:195] Run: rm -f paused
	I1212 00:33:44.693896  272590 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:44.696772  272590 pod_ready.go:83] waiting for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.700545  272590 pod_ready.go:94] pod "coredns-7d764666f9-44t4m" is "Ready"
	I1212 00:33:44.700562  272590 pod_ready.go:86] duration metric: took 3.773553ms for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.702235  272590 pod_ready.go:83] waiting for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.705546  272590 pod_ready.go:94] pod "etcd-no-preload-675290" is "Ready"
	I1212 00:33:44.705562  272590 pod_ready.go:86] duration metric: took 3.311339ms for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.707183  272590 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.710720  272590 pod_ready.go:94] pod "kube-apiserver-no-preload-675290" is "Ready"
	I1212 00:33:44.710738  272590 pod_ready.go:86] duration metric: took 3.539875ms for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.712525  272590 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.097959  272590 pod_ready.go:94] pod "kube-controller-manager-no-preload-675290" is "Ready"
	I1212 00:33:45.097987  272590 pod_ready.go:86] duration metric: took 385.439817ms for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.297886  272590 pod_ready.go:83] waiting for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.697387  272590 pod_ready.go:94] pod "kube-proxy-7pxpp" is "Ready"
	I1212 00:33:45.697415  272590 pod_ready.go:86] duration metric: took 399.505552ms for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.898558  272590 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:46.298339  272590 pod_ready.go:94] pod "kube-scheduler-no-preload-675290" is "Ready"
	I1212 00:33:46.298363  272590 pod_ready.go:86] duration metric: took 399.784653ms for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:46.298375  272590 pod_ready.go:40] duration metric: took 1.604453206s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:46.341140  272590 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 00:33:46.342768  272590 out.go:179] * Done! kubectl is now configured to use "no-preload-675290" cluster and "default" namespace by default
	W1212 00:33:44.820992  278144 node_ready.go:57] node "embed-certs-858659" has "Ready":"False" status (will retry)
	W1212 00:33:47.321019  278144 node_ready.go:57] node "embed-certs-858659" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 12 00:33:38 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:38.337186058Z" level=info msg="Starting container: 226437d96c8961fbce6c260727e2f6caa7cbebdf6560b7210365593b2a822e9f" id=eb447c5c-3822-4439-9e41-08f594c8945b name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:33:38 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:38.338790744Z" level=info msg="Started container" PID=2134 containerID=226437d96c8961fbce6c260727e2f6caa7cbebdf6560b7210365593b2a822e9f description=kube-system/coredns-5dd5756b68-nxwdc/coredns id=eb447c5c-3822-4439-9e41-08f594c8945b name=/runtime.v1.RuntimeService/StartContainer sandboxID=da173799b82dbde4f50bb6eed90df83747ff11e0b5a6db4796b647d96990dbca
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.422175313Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c6073039-4c3a-4dad-b881-10d0feee0771 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.422268186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.427534239Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2aa60d87b0a92e5c6042ed0d633a737926f816452863e60d295325c2e4f60483 UID:1a0e8330-9dea-4063-9369-234ee8e6ef43 NetNS:/var/run/netns/bd2c8fd1-9c4f-4937-be48-5520e2425538 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002beed8}] Aliases:map[]}"
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.427559755Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.436347346Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2aa60d87b0a92e5c6042ed0d633a737926f816452863e60d295325c2e4f60483 UID:1a0e8330-9dea-4063-9369-234ee8e6ef43 NetNS:/var/run/netns/bd2c8fd1-9c4f-4937-be48-5520e2425538 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002beed8}] Aliases:map[]}"
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.436458578Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.437178164Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.438018827Z" level=info msg="Ran pod sandbox 2aa60d87b0a92e5c6042ed0d633a737926f816452863e60d295325c2e4f60483 with infra container: default/busybox/POD" id=c6073039-4c3a-4dad-b881-10d0feee0771 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.43893795Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3049342a-8d3e-47fb-b457-297ad38ba1bb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.439052243Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3049342a-8d3e-47fb-b457-297ad38ba1bb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.439100343Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3049342a-8d3e-47fb-b457-297ad38ba1bb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.439631522Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3d110b37-cd91-4313-9b5f-e17e27c360b4 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.442844874Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.984741857Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=3d110b37-cd91-4313-9b5f-e17e27c360b4 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.985466046Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f165f7e-4fc7-46c4-9d11-234ae52598d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.986704376Z" level=info msg="Creating container: default/busybox/busybox" id=43a1ec5e-ec12-46e6-908c-ca4ea0fc6c66 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.986821906Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.990559212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:41 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:41.990984598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:42 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:42.022228234Z" level=info msg="Created container 179ed8642fdff959002e324242f401c51b32d3f7e9c5bcb0e9b2c6c0efb8968d: default/busybox/busybox" id=43a1ec5e-ec12-46e6-908c-ca4ea0fc6c66 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:33:42 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:42.022819889Z" level=info msg="Starting container: 179ed8642fdff959002e324242f401c51b32d3f7e9c5bcb0e9b2c6c0efb8968d" id=07bf54ac-e699-4f99-9666-322268dfd420 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:33:42 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:42.024762159Z" level=info msg="Started container" PID=2213 containerID=179ed8642fdff959002e324242f401c51b32d3f7e9c5bcb0e9b2c6c0efb8968d description=default/busybox/busybox id=07bf54ac-e699-4f99-9666-322268dfd420 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2aa60d87b0a92e5c6042ed0d633a737926f816452863e60d295325c2e4f60483
	Dec 12 00:33:49 old-k8s-version-743506 crio[767]: time="2025-12-12T00:33:49.192906812Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	179ed8642fdff       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   2aa60d87b0a92       busybox                                          default
	226437d96c896       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   da173799b82db       coredns-5dd5756b68-nxwdc                         kube-system
	362ab70c44a3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   6f9ec5a09d552       storage-provisioner                              kube-system
	9c05350479bef       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   62ff0f7628aa5       kindnet-s2gvw                                    kube-system
	c6fec06dedbb9       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   84da9fba7c26f       kube-proxy-pz8kt                                 kube-system
	e6f65e4732f9b       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   e39555926aea7       kube-apiserver-old-k8s-version-743506            kube-system
	e3446dbac95c8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   9a0d159af3075       kube-controller-manager-old-k8s-version-743506   kube-system
	a67e1c4f3be72       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   ef60a20203858       etcd-old-k8s-version-743506                      kube-system
	55edc7441a5f0       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   8757a8b8efcae       kube-scheduler-old-k8s-version-743506            kube-system
	
	
	==> coredns [226437d96c8961fbce6c260727e2f6caa7cbebdf6560b7210365593b2a822e9f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50157 - 22299 "HINFO IN 2217430359807296107.6625107494808435573. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079289882s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-743506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-743506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=old-k8s-version-743506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_33_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:33:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-743506
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:33:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:33:41 +0000   Fri, 12 Dec 2025 00:33:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:33:41 +0000   Fri, 12 Dec 2025 00:33:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:33:41 +0000   Fri, 12 Dec 2025 00:33:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:33:41 +0000   Fri, 12 Dec 2025 00:33:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-743506
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                6e4a36d1-9d16-43c1-a591-2e531ad940c7
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-nxwdc                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-743506                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-s2gvw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-743506             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-743506    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-pz8kt                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-743506             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-743506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-743506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-743506 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-743506 event: Registered Node old-k8s-version-743506 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-743506 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [a67e1c4f3be726ed8b768ee3c13ea31af274b9feecdbf22c3520881cfc7f819a] <==
	{"level":"info","ts":"2025-12-12T00:33:05.693734Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-12T00:33:05.697614Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-12T00:33:05.697716Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-12T00:33:05.697782Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-12T00:33:05.698282Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-12T00:33:05.69837Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-12T00:33:06.385192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-12T00:33:06.385241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-12T00:33:06.385261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-12T00:33:06.385277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-12T00:33:06.385286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-12T00:33:06.385297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-12T00:33:06.385308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-12T00:33:06.386076Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T00:33:06.386656Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-743506 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-12T00:33:06.386827Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T00:33:06.38684Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T00:33:06.386941Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T00:33:06.386968Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T00:33:06.386982Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T00:33:06.388219Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-12T00:33:06.388427Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-12T00:33:06.388496Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-12T00:33:06.38906Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-12T00:33:19.760056Z","caller":"traceutil/trace.go:171","msg":"trace[1643642686] transaction","detail":"{read_only:false; response_revision:278; number_of_response:1; }","duration":"207.78668ms","start":"2025-12-12T00:33:19.552248Z","end":"2025-12-12T00:33:19.760034Z","steps":["trace[1643642686] 'process raft request'  (duration: 207.650177ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:33:50 up  1:16,  0 user,  load average: 3.31, 2.55, 1.69
	Linux old-k8s-version-743506 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9c05350479bef1af7c92f57d9fd3336b7f8b625a5155614c2c0baf6567335ae1] <==
	I1212 00:33:27.380384       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:33:27.380626       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1212 00:33:27.380756       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:33:27.380772       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:33:27.380789       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:33:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:33:27.582064       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:33:27.582093       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:33:27.582104       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:33:27.582468       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:33:28.176020       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:33:28.176042       1 metrics.go:72] Registering metrics
	I1212 00:33:28.176099       1 controller.go:711] "Syncing nftables rules"
	I1212 00:33:37.582840       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:33:37.582889       1 main.go:301] handling current node
	I1212 00:33:47.585837       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:33:47.585872       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e6f65e4732f9b26db5e18f16df59fc3900a2c5131103453ddbb5fc0972383fb0] <==
	I1212 00:33:07.659522       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 00:33:07.658568       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 00:33:07.658581       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:33:07.659701       1 aggregator.go:166] initial CRD sync complete...
	I1212 00:33:07.659722       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 00:33:07.659730       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:33:07.659738       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:33:07.658706       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 00:33:07.659457       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 00:33:07.690667       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:33:08.557887       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 00:33:08.561272       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 00:33:08.561287       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:33:08.930809       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:33:08.961623       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:33:09.081453       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 00:33:09.087035       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1212 00:33:09.088075       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 00:33:09.091853       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:33:09.653069       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 00:33:10.277819       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 00:33:10.287780       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 00:33:10.296059       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 00:33:22.914044       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1212 00:33:23.069940       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e3446dbac95c8db5dd09c39624cb16d72923c2c166ed4e113207799fc52667ad] <==
	I1212 00:33:23.050288       1 shared_informer.go:318] Caches are synced for crt configmap
	I1212 00:33:23.051002       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1212 00:33:23.082303       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1212 00:33:23.101247       1 shared_informer.go:318] Caches are synced for attach detach
	I1212 00:33:23.120766       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:33:23.147850       1 shared_informer.go:318] Caches are synced for disruption
	I1212 00:33:23.159470       1 shared_informer.go:318] Caches are synced for stateful set
	I1212 00:33:23.171040       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:33:23.482202       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 00:33:23.488234       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:33:23.510375       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-sv9fp"
	I1212 00:33:23.517320       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nxwdc"
	I1212 00:33:23.536431       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="452.766066ms"
	I1212 00:33:23.543807       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-sv9fp"
	I1212 00:33:23.547465       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:33:23.547544       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 00:33:23.555709       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.224982ms"
	I1212 00:33:23.562520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.760953ms"
	I1212 00:33:23.562637       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.699µs"
	I1212 00:33:37.995337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.136µs"
	I1212 00:33:38.010171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.889µs"
	I1212 00:33:38.441332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="136.189µs"
	I1212 00:33:39.445837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.31363ms"
	I1212 00:33:39.446375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.302µs"
	I1212 00:33:42.916298       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [c6fec06dedbb943eadc6aefff781edb124764e5a7e8ecabf0e84ac00b3749b5b] <==
	I1212 00:33:24.834338       1 server_others.go:69] "Using iptables proxy"
	I1212 00:33:24.844853       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1212 00:33:24.862940       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:33:24.865433       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:33:24.865461       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:33:24.865468       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:33:24.865512       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:33:24.865728       1 server.go:846] "Version info" version="v1.28.0"
	I1212 00:33:24.865742       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:33:24.867102       1 config.go:188] "Starting service config controller"
	I1212 00:33:24.867142       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:33:24.867178       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:33:24.867190       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:33:24.867657       1 config.go:315] "Starting node config controller"
	I1212 00:33:24.867699       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:33:24.967543       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:33:24.968692       1 shared_informer.go:318] Caches are synced for node config
	I1212 00:33:24.968788       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [55edc7441a5f0d712a09e86aa1da6014c1d756a9d1c29ef672c4a1db85bdf02e] <==
	W1212 00:33:07.680454       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 00:33:07.680470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 00:33:07.680548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 00:33:07.680562       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 00:33:07.684038       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 00:33:07.684073       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:33:07.684234       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 00:33:07.684260       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 00:33:07.684284       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 00:33:07.684335       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 00:33:07.684360       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 00:33:07.684386       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 00:33:07.684459       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 00:33:07.684500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 00:33:07.684517       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 00:33:07.684545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 00:33:07.684582       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 00:33:07.684604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 00:33:07.690930       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:33:07.690957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 00:33:08.534591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 00:33:08.534622       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 00:33:08.768173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 00:33:08.768216       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1212 00:33:09.068781       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 00:33:23 old-k8s-version-743506 kubelet[1396]: I1212 00:33:23.018916    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/21d0881b-1da3-4a1d-967d-8f108d5d8a1f-cni-cfg\") pod \"kindnet-s2gvw\" (UID: \"21d0881b-1da3-4a1d-967d-8f108d5d8a1f\") " pod="kube-system/kindnet-s2gvw"
	Dec 12 00:33:23 old-k8s-version-743506 kubelet[1396]: I1212 00:33:23.018955    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21d0881b-1da3-4a1d-967d-8f108d5d8a1f-lib-modules\") pod \"kindnet-s2gvw\" (UID: \"21d0881b-1da3-4a1d-967d-8f108d5d8a1f\") " pod="kube-system/kindnet-s2gvw"
	Dec 12 00:33:23 old-k8s-version-743506 kubelet[1396]: I1212 00:33:23.018989    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/671c52f4-19ce-4b7b-8e97-e43f64cd4aeb-lib-modules\") pod \"kube-proxy-pz8kt\" (UID: \"671c52f4-19ce-4b7b-8e97-e43f64cd4aeb\") " pod="kube-system/kube-proxy-pz8kt"
	Dec 12 00:33:24 old-k8s-version-743506 kubelet[1396]: E1212 00:33:24.122785    1396 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:33:24 old-k8s-version-743506 kubelet[1396]: E1212 00:33:24.122949    1396 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/671c52f4-19ce-4b7b-8e97-e43f64cd4aeb-kube-proxy podName:671c52f4-19ce-4b7b-8e97-e43f64cd4aeb nodeName:}" failed. No retries permitted until 2025-12-12 00:33:24.622909739 +0000 UTC m=+14.369300817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/671c52f4-19ce-4b7b-8e97-e43f64cd4aeb-kube-proxy") pod "kube-proxy-pz8kt" (UID: "671c52f4-19ce-4b7b-8e97-e43f64cd4aeb") : failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:33:24 old-k8s-version-743506 kubelet[1396]: E1212 00:33:24.134380    1396 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:33:24 old-k8s-version-743506 kubelet[1396]: E1212 00:33:24.134418    1396 projected.go:198] Error preparing data for projected volume kube-api-access-kk2dj for pod kube-system/kube-proxy-pz8kt: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:33:24 old-k8s-version-743506 kubelet[1396]: E1212 00:33:24.134497    1396 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/671c52f4-19ce-4b7b-8e97-e43f64cd4aeb-kube-api-access-kk2dj podName:671c52f4-19ce-4b7b-8e97-e43f64cd4aeb nodeName:}" failed. No retries permitted until 2025-12-12 00:33:24.634466742 +0000 UTC m=+14.380857801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kk2dj" (UniqueName: "kubernetes.io/projected/671c52f4-19ce-4b7b-8e97-e43f64cd4aeb-kube-api-access-kk2dj") pod "kube-proxy-pz8kt" (UID: "671c52f4-19ce-4b7b-8e97-e43f64cd4aeb") : failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:33:24 old-k8s-version-743506 kubelet[1396]: E1212 00:33:24.137826    1396 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:33:24 old-k8s-version-743506 kubelet[1396]: E1212 00:33:24.137855    1396 projected.go:198] Error preparing data for projected volume kube-api-access-6lw5z for pod kube-system/kindnet-s2gvw: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:33:24 old-k8s-version-743506 kubelet[1396]: E1212 00:33:24.137916    1396 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d0881b-1da3-4a1d-967d-8f108d5d8a1f-kube-api-access-6lw5z podName:21d0881b-1da3-4a1d-967d-8f108d5d8a1f nodeName:}" failed. No retries permitted until 2025-12-12 00:33:24.637897521 +0000 UTC m=+14.384288575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6lw5z" (UniqueName: "kubernetes.io/projected/21d0881b-1da3-4a1d-967d-8f108d5d8a1f-kube-api-access-6lw5z") pod "kindnet-s2gvw" (UID: "21d0881b-1da3-4a1d-967d-8f108d5d8a1f") : failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:33:25 old-k8s-version-743506 kubelet[1396]: I1212 00:33:25.407947    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pz8kt" podStartSLOduration=3.4078973169999998 podCreationTimestamp="2025-12-12 00:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:25.407889952 +0000 UTC m=+15.154281021" watchObservedRunningTime="2025-12-12 00:33:25.407897317 +0000 UTC m=+15.154288385"
	Dec 12 00:33:27 old-k8s-version-743506 kubelet[1396]: I1212 00:33:27.409451    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-s2gvw" podStartSLOduration=2.932715042 podCreationTimestamp="2025-12-12 00:33:22 +0000 UTC" firstStartedPulling="2025-12-12 00:33:24.755693118 +0000 UTC m=+14.502084178" lastFinishedPulling="2025-12-12 00:33:27.232379986 +0000 UTC m=+16.978771041" observedRunningTime="2025-12-12 00:33:27.409269849 +0000 UTC m=+17.155660916" watchObservedRunningTime="2025-12-12 00:33:27.409401905 +0000 UTC m=+17.155792967"
	Dec 12 00:33:37 old-k8s-version-743506 kubelet[1396]: I1212 00:33:37.972280    1396 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 00:33:37 old-k8s-version-743506 kubelet[1396]: I1212 00:33:37.991980    1396 topology_manager.go:215] "Topology Admit Handler" podUID="ccc4d4a0-b9c6-4653-90dc-113128acc782" podNamespace="kube-system" podName="storage-provisioner"
	Dec 12 00:33:37 old-k8s-version-743506 kubelet[1396]: I1212 00:33:37.995467    1396 topology_manager.go:215] "Topology Admit Handler" podUID="e73711a2-208b-41a6-a47f-6253638cfdf2" podNamespace="kube-system" podName="coredns-5dd5756b68-nxwdc"
	Dec 12 00:33:38 old-k8s-version-743506 kubelet[1396]: I1212 00:33:38.132533    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pxgh\" (UniqueName: \"kubernetes.io/projected/e73711a2-208b-41a6-a47f-6253638cfdf2-kube-api-access-2pxgh\") pod \"coredns-5dd5756b68-nxwdc\" (UID: \"e73711a2-208b-41a6-a47f-6253638cfdf2\") " pod="kube-system/coredns-5dd5756b68-nxwdc"
	Dec 12 00:33:38 old-k8s-version-743506 kubelet[1396]: I1212 00:33:38.132591    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f79lr\" (UniqueName: \"kubernetes.io/projected/ccc4d4a0-b9c6-4653-90dc-113128acc782-kube-api-access-f79lr\") pod \"storage-provisioner\" (UID: \"ccc4d4a0-b9c6-4653-90dc-113128acc782\") " pod="kube-system/storage-provisioner"
	Dec 12 00:33:38 old-k8s-version-743506 kubelet[1396]: I1212 00:33:38.132690    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ccc4d4a0-b9c6-4653-90dc-113128acc782-tmp\") pod \"storage-provisioner\" (UID: \"ccc4d4a0-b9c6-4653-90dc-113128acc782\") " pod="kube-system/storage-provisioner"
	Dec 12 00:33:38 old-k8s-version-743506 kubelet[1396]: I1212 00:33:38.132754    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e73711a2-208b-41a6-a47f-6253638cfdf2-config-volume\") pod \"coredns-5dd5756b68-nxwdc\" (UID: \"e73711a2-208b-41a6-a47f-6253638cfdf2\") " pod="kube-system/coredns-5dd5756b68-nxwdc"
	Dec 12 00:33:38 old-k8s-version-743506 kubelet[1396]: I1212 00:33:38.440835    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.440756772 podCreationTimestamp="2025-12-12 00:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:38.432616863 +0000 UTC m=+28.179007931" watchObservedRunningTime="2025-12-12 00:33:38.440756772 +0000 UTC m=+28.187147840"
	Dec 12 00:33:38 old-k8s-version-743506 kubelet[1396]: I1212 00:33:38.440953    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-nxwdc" podStartSLOduration=15.44092282 podCreationTimestamp="2025-12-12 00:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:38.440744044 +0000 UTC m=+28.187135112" watchObservedRunningTime="2025-12-12 00:33:38.44092282 +0000 UTC m=+28.187313888"
	Dec 12 00:33:41 old-k8s-version-743506 kubelet[1396]: I1212 00:33:41.120792    1396 topology_manager.go:215] "Topology Admit Handler" podUID="1a0e8330-9dea-4063-9369-234ee8e6ef43" podNamespace="default" podName="busybox"
	Dec 12 00:33:41 old-k8s-version-743506 kubelet[1396]: I1212 00:33:41.252621    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72247\" (UniqueName: \"kubernetes.io/projected/1a0e8330-9dea-4063-9369-234ee8e6ef43-kube-api-access-72247\") pod \"busybox\" (UID: \"1a0e8330-9dea-4063-9369-234ee8e6ef43\") " pod="default/busybox"
	Dec 12 00:33:42 old-k8s-version-743506 kubelet[1396]: I1212 00:33:42.446244    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.900421602 podCreationTimestamp="2025-12-12 00:33:41 +0000 UTC" firstStartedPulling="2025-12-12 00:33:41.439243497 +0000 UTC m=+31.185634554" lastFinishedPulling="2025-12-12 00:33:41.984999093 +0000 UTC m=+31.731390142" observedRunningTime="2025-12-12 00:33:42.445628402 +0000 UTC m=+32.192019470" watchObservedRunningTime="2025-12-12 00:33:42.44617719 +0000 UTC m=+32.192568255"
	
	
	==> storage-provisioner [362ab70c44a3aa585353e516d667698d165136d30aeda973da8a24800fc1fec5] <==
	I1212 00:33:38.345292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:33:38.352379       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:33:38.352430       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:33:38.358455       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:33:38.358627       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-743506_b39004e4-98ad-4c88-9eb3-80e5be331f5f!
	I1212 00:33:38.358563       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"650d9cda-f8d6-4a81-8b1f-4e060e712651", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-743506_b39004e4-98ad-4c88-9eb3-80e5be331f5f became leader
	I1212 00:33:38.459508       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-743506_b39004e4-98ad-4c88-9eb3-80e5be331f5f!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-743506 -n old-k8s-version-743506
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-743506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.03s)
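The post-mortem above can be re-checked by hand with kubectl against the same profile; a minimal sketch, assuming the old-k8s-version-743506 context shown in the helper commands (the k8s-app=kube-dns selector and the 120s timeout are illustrative choices, not taken from the test code):

	# same filter the post-mortem helper uses: list any pods not in phase Running
	kubectl --context old-k8s-version-743506 get po -A --field-selector=status.phase!=Running
	# additionally wait for the kube-system DNS pods to report the Ready condition
	kubectl --context old-k8s-version-743506 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s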

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (234.794833ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:33:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-675290 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-675290 describe deploy/metrics-server -n kube-system: exit status 1 (54.833177ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-675290 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
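Note: the metrics-server deployment is never created because the enable step aborts earlier, at minikube's paused-state check, which shells out to "sudo runc list -f json" and fails on this crio node with "open /run/runc: no such file or directory" (see the MK_ADDON_ENABLE_PAUSED stderr above). A minimal way to confirm the missing runc state directory on the node, shown here only as an illustrative command and not part of the recorded run:

	minikube -p no-preload-675290 ssh -- "ls -ld /run/runc; sudo crictl ps -a --quiet | head -n 3"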
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-675290
helpers_test.go:244: (dbg) docker inspect no-preload-675290:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d",
	        "Created": "2025-12-12T00:32:58.309247922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 273750,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:32:58.34118007Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/hostname",
	        "HostsPath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/hosts",
	        "LogPath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d-json.log",
	        "Name": "/no-preload-675290",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-675290:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-675290",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d",
	                "LowerDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75/merged",
	                "UpperDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75/diff",
	                "WorkDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-675290",
	                "Source": "/var/lib/docker/volumes/no-preload-675290/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-675290",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-675290",
	                "name.minikube.sigs.k8s.io": "no-preload-675290",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0e2082bccc87a851a831be04ae924858b7fc3885f209d68bade9780ac3e8ddc1",
	            "SandboxKey": "/var/run/docker/netns/0e2082bccc87",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-675290": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f766d8223619c67c6480629ff7786cf2f3559f1e416095164a10f67db0a3ed9d",
	                    "EndpointID": "12a6ede7bcb3c94d0bd7e23d8a35bd8dd1c12173664093953e90a0265a7af258",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8e:0f:e4:d6:24:4c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-675290",
	                        "822239fdcf28"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
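Note: in the HostConfig section above, tmpfs is mounted over both /run and /tmp inside the kicbase container; /run is also where a runc state directory such as /run/runc would normally live. As an illustrative command (not part of the captured run), those mounts can be extracted directly from the inspect output:

	docker inspect -f '{{json .HostConfig.Tmpfs}}' no-preload-675290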
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675290 -n no-preload-675290
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-675290 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p NoKubernetes-131237 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-131237       │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │                     │
	│ delete  │ -p NoKubernetes-131237                                                                                                                                                                                                                        │ NoKubernetes-131237       │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ start   │ -p force-systemd-flag-610815 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-610815 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ ssh     │ force-systemd-flag-610815 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-610815 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ delete  │ -p force-systemd-flag-610815                                                                                                                                                                                                                  │ force-systemd-flag-610815 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ start   │ -p missing-upgrade-038405 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-038405    │ jenkins │ v1.35.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:31 UTC │
	│ start   │ -p missing-upgrade-038405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-038405    │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ delete  │ -p missing-upgrade-038405                                                                                                                                                                                                                     │ missing-upgrade-038405    │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ start   │ -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-605797 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:32 UTC │
	│ stop    │ -p kubernetes-upgrade-605797                                                                                                                                                                                                                  │ kubernetes-upgrade-605797 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-605797 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │                     │
	│ delete  │ -p stopped-upgrade-148693                                                                                                                                                                                                                     │ stopped-upgrade-148693    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p cert-options-319518 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p cert-expiration-673665 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-673665    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ delete  │ -p running-upgrade-299658                                                                                                                                                                                                                     │ running-upgrade-299658    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-743506    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-expiration-673665                                                                                                                                                                                                                     │ cert-expiration-673665    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-675290         │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ cert-options-319518 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ -p cert-options-319518 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-options-319518                                                                                                                                                                                                                        │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-858659        │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-743506    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p old-k8s-version-743506 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-743506    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-675290         │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:33:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:33:14.727394  278144 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:33:14.727527  278144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:33:14.727540  278144 out.go:374] Setting ErrFile to fd 2...
	I1212 00:33:14.727548  278144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:33:14.727835  278144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:33:14.728471  278144 out.go:368] Setting JSON to false
	I1212 00:33:14.730016  278144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4541,"bootTime":1765495054,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:33:14.730096  278144 start.go:143] virtualization: kvm guest
	I1212 00:33:14.732300  278144 out.go:179] * [embed-certs-858659] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:33:14.733495  278144 notify.go:221] Checking for updates...
	I1212 00:33:14.733499  278144 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:33:14.734863  278144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:33:14.736177  278144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:14.737704  278144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:33:14.738840  278144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:33:14.739929  278144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:33:14.741610  278144 config.go:182] Loaded profile config "kubernetes-upgrade-605797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:33:14.741755  278144 config.go:182] Loaded profile config "no-preload-675290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:33:14.741905  278144 config.go:182] Loaded profile config "old-k8s-version-743506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 00:33:14.742081  278144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:33:14.769409  278144 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:33:14.769547  278144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:33:14.824953  278144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-12 00:33:14.815394412 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:33:14.825066  278144 docker.go:319] overlay module found
	I1212 00:33:14.827615  278144 out.go:179] * Using the docker driver based on user configuration
	I1212 00:33:14.828650  278144 start.go:309] selected driver: docker
	I1212 00:33:14.828664  278144 start.go:927] validating driver "docker" against <nil>
	I1212 00:33:14.828675  278144 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:33:14.829458  278144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:33:14.887224  278144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-12 00:33:14.877162845 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:33:14.887403  278144 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:33:14.887687  278144 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:33:14.891675  278144 out.go:179] * Using Docker driver with root privileges
	I1212 00:33:14.892895  278144 cni.go:84] Creating CNI manager for ""
	I1212 00:33:14.893000  278144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:14.893016  278144 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:33:14.893119  278144 start.go:353] cluster config:
	{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:33:14.894469  278144 out.go:179] * Starting "embed-certs-858659" primary control-plane node in "embed-certs-858659" cluster
	I1212 00:33:14.895585  278144 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:33:14.896800  278144 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:33:14.897761  278144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:33:14.897795  278144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:33:14.897814  278144 cache.go:65] Caching tarball of preloaded images
	I1212 00:33:14.897820  278144 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:33:14.897914  278144 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:33:14.897930  278144 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:33:14.898070  278144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:33:14.898101  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json: {Name:mka3ad5a51f2e77701ec67a66227f8bb0b6994ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:14.921669  278144 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:33:14.921689  278144 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:33:14.921708  278144 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:33:14.921743  278144 start.go:360] acquireMachinesLock for embed-certs-858659: {Name:mk65733daa8eb01c9a3ad2d27b0888c2a1a8b319 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:33:14.921849  278144 start.go:364] duration metric: took 84.758µs to acquireMachinesLock for "embed-certs-858659"
	I1212 00:33:14.921880  278144 start.go:93] Provisioning new machine with config: &{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:14.921967  278144 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:33:11.914966  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:12.414641  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:12.914599  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:13.414343  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:13.914224  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:14.414638  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:14.914709  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:15.414947  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:15.915234  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:16.414519  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:14.795605  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:33:14.795652  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:12.673526  272590 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.421983552s)
	I1212 00:33:12.673554  272590 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-10975/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1212 00:33:12.673588  272590 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 00:33:12.673653  272590 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 00:33:14.248095  272590 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.574416538s)
	I1212 00:33:14.248127  272590 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-10975/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1212 00:33:14.248166  272590 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 00:33:14.248233  272590 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 00:33:14.813113  272590 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-10975/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 00:33:14.813155  272590 cache_images.go:125] Successfully loaded all cached images
	I1212 00:33:14.813162  272590 cache_images.go:94] duration metric: took 10.342276233s to LoadCachedImages
	I1212 00:33:14.813176  272590 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 00:33:14.813282  272590 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-675290 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-675290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:33:14.813361  272590 ssh_runner.go:195] Run: crio config
	I1212 00:33:14.863560  272590 cni.go:84] Creating CNI manager for ""
	I1212 00:33:14.863584  272590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:14.863605  272590 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:33:14.863636  272590 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-675290 NodeName:no-preload-675290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:33:14.863772  272590 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-675290"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:33:14.863848  272590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:33:14.873217  272590 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1212 00:33:14.873288  272590 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:33:14.881909  272590 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1212 00:33:14.881961  272590 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1212 00:33:14.882020  272590 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1212 00:33:14.881963  272590 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1212 00:33:14.886285  272590 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1212 00:33:14.886316  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1212 00:33:15.884197  272590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:33:15.898090  272590 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1212 00:33:15.901895  272590 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1212 00:33:15.901925  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1212 00:33:16.266026  272590 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1212 00:33:16.270813  272590 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1212 00:33:16.270848  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1212 00:33:16.443667  272590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:33:16.461915  272590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:33:16.477224  272590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:33:16.496004  272590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1212 00:33:16.509653  272590 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:33:16.513544  272590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:33:16.524548  272590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:16.615408  272590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:16.649109  272590 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290 for IP: 192.168.76.2
	I1212 00:33:16.649132  272590 certs.go:195] generating shared ca certs ...
	I1212 00:33:16.649153  272590 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:16.649332  272590 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:33:16.649377  272590 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:33:16.649387  272590 certs.go:257] generating profile certs ...
	I1212 00:33:16.649448  272590 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.key
	I1212 00:33:16.649462  272590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.crt with IP's: []
	I1212 00:33:16.748107  272590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.crt ...
	I1212 00:33:16.748134  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.crt: {Name:mk00022ca9e5428de7e5a583050d69c3c5c2bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:16.748331  272590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.key ...
	I1212 00:33:16.748345  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.key: {Name:mkd4f1314753e5364a38754983dd2956364020bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:16.748457  272590 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46
	I1212 00:33:16.748482  272590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 00:33:17.118681  272590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46 ...
	I1212 00:33:17.118712  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46: {Name:mk0a1028bc5d92431abb55b4e7c2d66cfbf9c8a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:17.118911  272590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46 ...
	I1212 00:33:17.118935  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46: {Name:mk90ce7b5c7ba44e4d4cdb05bfe31ea45a556159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:17.119082  272590 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt
	I1212 00:33:17.119258  272590 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key
	I1212 00:33:17.119356  272590 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key
	I1212 00:33:17.119383  272590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt with IP's: []
	I1212 00:33:17.206397  272590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt ...
	I1212 00:33:17.206424  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt: {Name:mk4dba249fcf82c557c21e700f31c0a67e228b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:17.206613  272590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key ...
	I1212 00:33:17.206637  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key: {Name:mkd43d2f214a353e18ef7df608cd9a29775c0278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
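
The certs.go/crypto.go lines above mint per-profile client and apiserver certificates signed by the shared minikubeCA. As a rough, standalone illustration of that kind of flow (not minikube's actual code; the file names, subject, and the assumption of an RSA PKCS#1 CA key are all illustrative), signing a client certificate with Go's crypto/x509 looks roughly like this:

// signclientcert.go: minimal sketch of signing a client certificate with an
// existing CA key pair, loosely mirroring the "generating signed profile cert"
// step in the log. Paths and the certificate subject are illustrative.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func mustReadPEM(path, wantType string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil || block.Type != wantType {
		log.Fatalf("%s: expected %s PEM block", path, wantType)
	}
	return block.Bytes
}

func main() {
	// Load the CA certificate and its RSA private key (ca.crt / ca.key).
	caCert, err := x509.ParseCertificate(mustReadPEM("ca.crt", "CERTIFICATE"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustReadPEM("ca.key", "RSA PRIVATE KEY"))
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the client ("minikube-user"-style profile cert).
	clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}

	// Sign the client certificate with the CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &clientKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	// Write client.crt and client.key as PEM files.
	if err := os.WriteFile("client.crt",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("client.key",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(clientKey)}), 0o600); err != nil {
		log.Fatal(err)
	}
}
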
	I1212 00:33:17.206850  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:33:17.206900  272590 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:33:17.206928  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:33:17.206975  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:33:17.207014  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:33:17.207055  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:33:17.207133  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:33:17.207775  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:33:17.225767  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:33:17.243136  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:33:17.260318  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:33:17.277081  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:33:17.294021  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:33:17.311412  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:33:17.329281  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:33:17.346588  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:33:14.923847  278144 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 00:33:14.924337  278144 start.go:159] libmachine.API.Create for "embed-certs-858659" (driver="docker")
	I1212 00:33:14.924390  278144 client.go:173] LocalClient.Create starting
	I1212 00:33:14.924524  278144 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem
	I1212 00:33:14.924577  278144 main.go:143] libmachine: Decoding PEM data...
	I1212 00:33:14.924604  278144 main.go:143] libmachine: Parsing certificate...
	I1212 00:33:14.924685  278144 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem
	I1212 00:33:14.924718  278144 main.go:143] libmachine: Decoding PEM data...
	I1212 00:33:14.924746  278144 main.go:143] libmachine: Parsing certificate...
	I1212 00:33:14.925302  278144 cli_runner.go:164] Run: docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:33:14.950174  278144 cli_runner.go:211] docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:33:14.950269  278144 network_create.go:284] running [docker network inspect embed-certs-858659] to gather additional debugging logs...
	I1212 00:33:14.950295  278144 cli_runner.go:164] Run: docker network inspect embed-certs-858659
	W1212 00:33:14.974761  278144 cli_runner.go:211] docker network inspect embed-certs-858659 returned with exit code 1
	I1212 00:33:14.974794  278144 network_create.go:287] error running [docker network inspect embed-certs-858659]: docker network inspect embed-certs-858659: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-858659 not found
	I1212 00:33:14.974816  278144 network_create.go:289] output of [docker network inspect embed-certs-858659]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-858659 not found
	
	** /stderr **
	I1212 00:33:14.975003  278144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:33:15.001326  278144 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ad72354fdcf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:1e:a7:00:22:62} reservation:<nil>}
	I1212 00:33:15.002281  278144 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-18bf2e8051c8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:8e:8f:a6:9d:0c} reservation:<nil>}
	I1212 00:33:15.003308  278144 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5251ddf0a35 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:5a:cf:fd:1d:2a} reservation:<nil>}
	I1212 00:33:15.004265  278144 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f766d8223619 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:f6:b4:47:2f:69:da} reservation:<nil>}
	I1212 00:33:15.005079  278144 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f077d203a2ba IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:8e:69:f0:f2:3a:5d} reservation:<nil>}
	I1212 00:33:15.006240  278144 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb2ab0}
	I1212 00:33:15.006276  278144 network_create.go:124] attempt to create docker network embed-certs-858659 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1212 00:33:15.006349  278144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-858659 embed-certs-858659
	I1212 00:33:15.070104  278144 network_create.go:108] docker network embed-certs-858659 192.168.94.0/24 created
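
The network.go lines above walk a fixed progression of private /24 candidates (192.168.49.0/24, 192.168.58.0/24, ... in steps of 9) and take the first subnet with no existing bridge. A minimal sketch of that selection logic, assuming the set of taken subnets has already been collected from `docker network inspect`:

// pickSubnet returns the first candidate 192.168.x.0/24 subnet that is not
// already used by an existing Docker bridge. The candidate step of 9 matches
// the progression visible in the log (49, 58, 67, 76, 85, 94, ...).
package main

import "fmt"

func pickSubnet(taken map[string]bool) (string, bool) {
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	// Subnets already claimed by other profiles, per the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	if s, ok := pickSubnet(taken); ok {
		fmt.Println("using free private subnet", s) // -> 192.168.94.0/24
	}
}
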
	I1212 00:33:15.070132  278144 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-858659" container
	I1212 00:33:15.070189  278144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:33:15.087641  278144 cli_runner.go:164] Run: docker volume create embed-certs-858659 --label name.minikube.sigs.k8s.io=embed-certs-858659 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:33:15.104140  278144 oci.go:103] Successfully created a docker volume embed-certs-858659
	I1212 00:33:15.104214  278144 cli_runner.go:164] Run: docker run --rm --name embed-certs-858659-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-858659 --entrypoint /usr/bin/test -v embed-certs-858659:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:33:16.284491  278144 cli_runner.go:217] Completed: docker run --rm --name embed-certs-858659-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-858659 --entrypoint /usr/bin/test -v embed-certs-858659:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.180222233s)
	I1212 00:33:16.284534  278144 oci.go:107] Successfully prepared a docker volume embed-certs-858659
	I1212 00:33:16.284589  278144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:33:16.284601  278144 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:33:16.284643  278144 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-858659:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:33:16.914882  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:17.414926  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:17.914528  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:18.414943  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:18.914927  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:19.414842  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:19.914278  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:20.414704  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:20.914717  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:21.414504  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:19.592618  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:49408->192.168.85.2:8443: read: connection reset by peer
	I1212 00:33:19.592823  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:19.593231  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:19.791619  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:19.792087  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:20.291785  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:20.292172  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:20.791618  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:20.791974  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:21.291768  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:21.292177  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:21.790791  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:21.791240  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
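
The api_server.go lines above are a fixed-interval probe of the apiserver's /healthz endpoint, where "connection refused" or "connection reset" simply means the apiserver is not up yet. A minimal sketch of such a probe (the 500ms interval and the InsecureSkipVerify transport are assumptions for illustration; a real client would trust the cluster CA instead):

// waitForHealthz polls an HTTPS /healthz endpoint until it answers 200 OK or
// the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Connection refused/reset just means the apiserver is not up yet.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
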
	I1212 00:33:17.366333  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:33:17.384570  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:33:17.401944  272590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:33:17.414391  272590 ssh_runner.go:195] Run: openssl version
	I1212 00:33:17.421380  272590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.428974  272590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:33:17.436624  272590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.440514  272590 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.440571  272590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.476855  272590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:33:17.485655  272590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:33:17.494695  272590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.502643  272590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:33:17.510565  272590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.514372  272590 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.514428  272590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.550028  272590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:17.558154  272590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:17.566696  272590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.574819  272590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:33:17.584533  272590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.588378  272590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.588451  272590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.624118  272590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:33:17.631989  272590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
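
The step above installs each CA bundle by hashing it with `openssl x509 -hash -noout -in` and pointing the conventional /etc/ssl/certs/<hash>.0 symlink at it. A small sketch of that flow, shelling out to the same openssl command seen in the log (the helper name installCACert and the paths are illustrative):

// installCACert computes the OpenSSL subject hash of a PEM certificate and
// points the <hash>.0 symlink in certsDir at it, replicating `ln -fs`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // `ln -fs` semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
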
	I1212 00:33:17.639398  272590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:33:17.643488  272590 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:33:17.643544  272590 kubeadm.go:401] StartCluster: {Name:no-preload-675290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-675290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:33:17.643628  272590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:33:17.643690  272590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:33:17.673075  272590 cri.go:89] found id: ""
	I1212 00:33:17.673155  272590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:33:17.681463  272590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:33:17.689858  272590 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:33:17.689918  272590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:33:17.697594  272590 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:33:17.697617  272590 kubeadm.go:158] found existing configuration files:
	
	I1212 00:33:17.697658  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:33:17.705097  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:33:17.705142  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:33:17.712562  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:33:17.720683  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:33:17.720733  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:33:17.728201  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:33:17.735840  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:33:17.735895  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:33:17.743333  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:33:17.750913  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:33:17.750963  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
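
The config check above greps each existing /etc/kubernetes/*.conf for the expected control-plane endpoint and deletes any file missing it, so kubeadm can regenerate a consistent set. A hedged sketch of that cleanup (the helper name and its output are illustrative, not minikube's code):

// cleanStaleConfigs keeps each kubeconfig only if it already references the
// expected control-plane endpoint; otherwise the file is removed.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: remove it (ignoring "not exists").
			os.Remove(p)
			fmt.Printf("removed or skipped stale config %s\n", p)
			continue
		}
		fmt.Printf("keeping %s\n", p)
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
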
	I1212 00:33:17.757854  272590 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:33:17.864943  272590 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:33:17.921915  272590 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:33:21.914513  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:22.415154  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:22.914233  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:23.013859  270803 kubeadm.go:1114] duration metric: took 11.693512767s to wait for elevateKubeSystemPrivileges
	I1212 00:33:23.013899  270803 kubeadm.go:403] duration metric: took 21.39235223s to StartCluster
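
The repeating `kubectl get sa default` lines from process 270803 are a 500ms polling loop that waits for the default service account to exist in the new cluster before privileges are elevated. A simple sketch of that wait, assuming the kubectl binary and kubeconfig paths shown in the log:

// waitForDefaultSA repeatedly runs `kubectl get sa default` until it succeeds
// or the timeout expires.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}
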
	I1212 00:33:23.013922  270803 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:23.014005  270803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:23.015046  270803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:23.015303  270803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:33:23.015318  270803 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:23.015380  270803 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:33:23.015468  270803 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-743506"
	I1212 00:33:23.015507  270803 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-743506"
	I1212 00:33:23.015517  270803 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-743506"
	I1212 00:33:23.015542  270803 host.go:66] Checking if "old-k8s-version-743506" exists ...
	I1212 00:33:23.015545  270803 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-743506"
	I1212 00:33:23.015549  270803 config.go:182] Loaded profile config "old-k8s-version-743506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 00:33:23.015961  270803 cli_runner.go:164] Run: docker container inspect old-k8s-version-743506 --format={{.State.Status}}
	I1212 00:33:23.016070  270803 cli_runner.go:164] Run: docker container inspect old-k8s-version-743506 --format={{.State.Status}}
	I1212 00:33:23.016790  270803 out.go:179] * Verifying Kubernetes components...
	I1212 00:33:23.018074  270803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:23.042393  270803 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-743506"
	I1212 00:33:23.042558  270803 host.go:66] Checking if "old-k8s-version-743506" exists ...
	I1212 00:33:23.042986  270803 cli_runner.go:164] Run: docker container inspect old-k8s-version-743506 --format={{.State.Status}}
	I1212 00:33:23.043608  270803 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:33:23.044740  270803 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:23.044768  270803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:33:23.044818  270803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743506
	I1212 00:33:23.079331  270803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/old-k8s-version-743506/id_rsa Username:docker}
	I1212 00:33:23.080872  270803 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:23.080897  270803 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:33:23.080957  270803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743506
	I1212 00:33:23.105874  270803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/old-k8s-version-743506/id_rsa Username:docker}
	I1212 00:33:23.153068  270803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:33:23.193030  270803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:23.203058  270803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:23.226507  270803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:23.451216  270803 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1212 00:33:23.452428  270803 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-743506" to be "Ready" ...
	I1212 00:33:23.654056  270803 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:33:20.370047  278144 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-858659:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.085338961s)
	I1212 00:33:20.370083  278144 kic.go:203] duration metric: took 4.085477303s to extract preloaded images to volume ...
	W1212 00:33:20.370184  278144 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 00:33:20.370229  278144 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 00:33:20.370279  278144 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:33:20.431590  278144 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-858659 --name embed-certs-858659 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-858659 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-858659 --network embed-certs-858659 --ip 192.168.94.2 --volume embed-certs-858659:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 00:33:20.738217  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Running}}
	I1212 00:33:20.759182  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:20.778854  278144 cli_runner.go:164] Run: docker exec embed-certs-858659 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:33:20.828668  278144 oci.go:144] the created container "embed-certs-858659" has a running status.
	I1212 00:33:20.828706  278144 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa...
	I1212 00:33:20.965886  278144 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:33:20.996346  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:21.014956  278144 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:33:21.014979  278144 kic_runner.go:114] Args: [docker exec --privileged embed-certs-858659 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:33:21.082712  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:21.106586  278144 machine.go:94] provisionDockerMachine start ...
	I1212 00:33:21.106678  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.131359  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:21.131707  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:21.131730  278144 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:33:21.276119  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:33:21.276147  278144 ubuntu.go:182] provisioning hostname "embed-certs-858659"
	I1212 00:33:21.276210  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.297386  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:21.297706  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:21.297733  278144 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-858659 && echo "embed-certs-858659" | sudo tee /etc/hostname
	I1212 00:33:21.451952  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:33:21.452044  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.476325  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:21.476652  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:21.476682  278144 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-858659' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-858659/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-858659' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:33:21.621125  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:33:21.621157  278144 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:33:21.621206  278144 ubuntu.go:190] setting up certificates
	I1212 00:33:21.621218  278144 provision.go:84] configureAuth start
	I1212 00:33:21.621282  278144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:33:21.642072  278144 provision.go:143] copyHostCerts
	I1212 00:33:21.642136  278144 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:33:21.642150  278144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:33:21.642232  278144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:33:21.642360  278144 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:33:21.642374  278144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:33:21.642414  278144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:33:21.642534  278144 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:33:21.642548  278144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:33:21.642588  278144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:33:21.642676  278144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.embed-certs-858659 san=[127.0.0.1 192.168.94.2 embed-certs-858659 localhost minikube]
	I1212 00:33:21.788738  278144 provision.go:177] copyRemoteCerts
	I1212 00:33:21.788793  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:33:21.788830  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.806536  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:21.918035  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:33:21.945961  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:33:21.965649  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 00:33:21.983776  278144 provision.go:87] duration metric: took 362.534714ms to configureAuth
	I1212 00:33:21.983806  278144 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:33:21.984002  278144 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:33:21.984122  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.014438  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:22.014755  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:22.014780  278144 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:33:22.307190  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:33:22.307211  278144 machine.go:97] duration metric: took 1.200604199s to provisionDockerMachine
	I1212 00:33:22.307228  278144 client.go:176] duration metric: took 7.38282296s to LocalClient.Create
	I1212 00:33:22.307253  278144 start.go:167] duration metric: took 7.38291887s to libmachine.API.Create "embed-certs-858659"
	I1212 00:33:22.307266  278144 start.go:293] postStartSetup for "embed-certs-858659" (driver="docker")
	I1212 00:33:22.307280  278144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:33:22.307346  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:33:22.307394  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.325538  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.424001  278144 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:33:22.427488  278144 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:33:22.427521  278144 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:33:22.427537  278144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:33:22.427586  278144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:33:22.427662  278144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:33:22.427750  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:33:22.435119  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:33:22.455192  278144 start.go:296] duration metric: took 147.912888ms for postStartSetup
	I1212 00:33:22.455613  278144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:33:22.474728  278144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:33:22.474959  278144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:33:22.474994  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.492769  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.583869  278144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:33:22.588020  278144 start.go:128] duration metric: took 7.666039028s to createHost
	I1212 00:33:22.588046  278144 start.go:83] releasing machines lock for "embed-certs-858659", held for 7.666181656s
	I1212 00:33:22.588106  278144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:33:22.604658  278144 ssh_runner.go:195] Run: cat /version.json
	I1212 00:33:22.604702  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.604722  278144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:33:22.604782  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.624351  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.624645  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.766960  278144 ssh_runner.go:195] Run: systemctl --version
	I1212 00:33:22.773040  278144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:33:22.807594  278144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:33:22.813502  278144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:33:22.813578  278144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:33:22.848889  278144 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:33:22.848911  278144 start.go:496] detecting cgroup driver to use...
	I1212 00:33:22.848946  278144 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:33:22.849004  278144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:33:22.866214  278144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:33:22.883089  278144 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:33:22.883154  278144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:33:22.903153  278144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:33:22.928607  278144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:33:23.051147  278144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:33:23.207194  278144 docker.go:234] disabling docker service ...
	I1212 00:33:23.207354  278144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:33:23.232382  278144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:33:23.250204  278144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:33:23.374364  278144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:33:23.499465  278144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:33:23.517532  278144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:33:23.538077  278144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:33:23.538180  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.551014  278144 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:33:23.551078  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.562817  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.574022  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.585051  278144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:33:23.594447  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.605211  278144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.621829  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.632339  278144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:33:23.642118  278144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:33:23.651626  278144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:23.735724  278144 ssh_runner.go:195] Run: sudo systemctl restart crio
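
The series of sed edits above rewrites single `key = value` lines in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, sysctls) before restarting cri-o. A sketch of the same line-rewrite idea in Go (the helper setCrioOption and the whole-file rewrite are illustrative, not minikube's implementation):

// setCrioOption rewrites a `key = value` line in a cri-o drop-in config,
// mirroring `sed -i 's|^.*key = .*$|key = "value"|'`.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	for k, v := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10.1",
		"cgroup_manager": "systemd",
	} {
		if err := setCrioOption(conf, k, v); err != nil {
			fmt.Println(err)
		}
	}
}
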
	I1212 00:33:23.880406  278144 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:33:23.880467  278144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:33:23.884392  278144 start.go:564] Will wait 60s for crictl version
	I1212 00:33:23.884439  278144 ssh_runner.go:195] Run: which crictl
	I1212 00:33:23.887821  278144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:33:23.911824  278144 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:33:23.911892  278144 ssh_runner.go:195] Run: crio --version
	I1212 00:33:23.938726  278144 ssh_runner.go:195] Run: crio --version
	I1212 00:33:23.967016  278144 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:33:23.968117  278144 cli_runner.go:164] Run: docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:33:23.986276  278144 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 00:33:23.990186  278144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
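
The bash one-liner above refreshes the host.minikube.internal record in the guest's /etc/hosts: drop any existing entry, then append "<gateway-ip><TAB>host.minikube.internal". An equivalent sketch (the helper name addHostRecord is illustrative):

// addHostRecord removes any stale line for the given name and appends a fresh
// "ip<TAB>name" record, matching the grep -v / echo pipeline in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func addHostRecord(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale record
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := addHostRecord("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
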
	I1212 00:33:24.000018  278144 kubeadm.go:884] updating cluster {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:33:24.000130  278144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:33:24.000180  278144 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:33:24.032804  278144 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:33:24.032832  278144 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:33:24.032890  278144 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:33:24.062431  278144 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:33:24.062456  278144 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:33:24.062465  278144 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1212 00:33:24.062645  278144 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-858659 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:33:24.062725  278144 ssh_runner.go:195] Run: crio config
	I1212 00:33:24.109010  278144 cni.go:84] Creating CNI manager for ""
	I1212 00:33:24.109037  278144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:24.109054  278144 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:33:24.109075  278144 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-858659 NodeName:embed-certs-858659 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:33:24.109202  278144 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-858659"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:33:24.109260  278144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:33:24.117514  278144 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:33:24.117569  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:33:24.125056  278144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1212 00:33:24.137518  278144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:33:24.151599  278144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
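	The 2214-byte payload staged above is the multi-document kubeadm config rendered earlier in this log (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---). It lands as kubeadm.yaml.new, is promoted to kubeadm.yaml, and is handed to kubeadm init in one shot; a sketch of that invocation, with the preflight ignore list abbreviated from the full command that appears further down in this log:

	    # Promote the staged config and run kubeadm against it (ignore list abbreviated).
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification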
	I1212 00:33:24.163535  278144 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:33:24.166948  278144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:33:24.176282  278144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:24.267072  278144 ssh_runner.go:195] Run: sudo systemctl start kubelet
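	The kubelet is brought up through systemd: a 10-kubeadm.conf drop-in carrying the ExecStart override shown a few lines up, plus the base kubelet.service unit, are copied into place before the daemon-reload and start above. A sketch of that sequence, assuming the two files have already been rendered locally (minikube streams them over SSH instead):

	    # Install the kubelet unit and its kubeadm drop-in, then start the service.
	    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	    sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    sudo cp kubelet.service /lib/systemd/system/kubelet.service
	    sudo systemctl daemon-reload
	    sudo systemctl start kubelet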
	I1212 00:33:24.289673  278144 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659 for IP: 192.168.94.2
	I1212 00:33:24.289695  278144 certs.go:195] generating shared ca certs ...
	I1212 00:33:24.289716  278144 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.289894  278144 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:33:24.289977  278144 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:33:24.289996  278144 certs.go:257] generating profile certs ...
	I1212 00:33:24.290069  278144 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key
	I1212 00:33:24.290095  278144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.crt with IP's: []
	I1212 00:33:24.425621  278144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.crt ...
	I1212 00:33:24.425651  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.crt: {Name:mk45b9fc7c32e03cd8b8b253cee0beecc89168ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.425825  278144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key ...
	I1212 00:33:24.425841  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key: {Name:mk7969034f40478ebc3fcd8da2e89e524ba77096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.425960  278144 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc
	I1212 00:33:24.425981  278144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1212 00:33:24.627246  278144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc ...
	I1212 00:33:24.627271  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc: {Name:mkf363b5d4278a387e18f286b3c76b364b923111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.627425  278144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc ...
	I1212 00:33:24.627438  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc: {Name:mkb8910c32db51006465f917ed06964af4a9674d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.627524  278144 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt
	I1212 00:33:24.627596  278144 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key
	I1212 00:33:24.627649  278144 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key
	I1212 00:33:24.627670  278144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt with IP's: []
	I1212 00:33:24.683493  278144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt ...
	I1212 00:33:24.683515  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt: {Name:mk4c83643d0e89a51dc996cf2dabd1ed6bdbf2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.683642  278144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key ...
	I1212 00:33:24.683664  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key: {Name:mkf320b4e9041cf5c42937a4ade7d266ee3cce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.683853  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:33:24.683891  278144 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:33:24.683898  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:33:24.683920  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:33:24.683943  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:33:24.683966  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:33:24.684004  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:33:24.684593  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:33:24.702599  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:33:24.719710  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:33:25.192170  272590 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:33:25.192235  272590 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:33:25.192361  272590 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:33:25.192432  272590 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:33:25.192519  272590 kubeadm.go:319] OS: Linux
	I1212 00:33:25.192568  272590 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:33:25.192609  272590 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:33:25.192664  272590 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:33:25.192706  272590 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:33:25.192747  272590 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:33:25.192786  272590 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:33:25.192831  272590 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:33:25.192889  272590 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:33:25.192982  272590 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:33:25.193125  272590 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:33:25.193255  272590 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:33:25.193337  272590 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:33:25.194568  272590 out.go:252]   - Generating certificates and keys ...
	I1212 00:33:25.194653  272590 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:33:25.194746  272590 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:33:25.194846  272590 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:33:25.194937  272590 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:33:25.195025  272590 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:33:25.195094  272590 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:33:25.195179  272590 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:33:25.195303  272590 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-675290] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:33:25.195397  272590 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:33:25.195573  272590 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-675290] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:33:25.195669  272590 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:33:25.195763  272590 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:33:25.195833  272590 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:33:25.195930  272590 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:33:25.196016  272590 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:33:25.196099  272590 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:33:25.196177  272590 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:33:25.196289  272590 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:33:25.196385  272590 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:33:25.196532  272590 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:33:25.196627  272590 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:33:25.197638  272590 out.go:252]   - Booting up control plane ...
	I1212 00:33:25.197759  272590 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:33:25.197871  272590 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:33:25.197963  272590 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:33:25.198118  272590 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:33:25.198245  272590 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:33:25.198333  272590 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:33:25.198415  272590 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:33:25.198461  272590 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:33:25.198654  272590 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:33:25.198773  272590 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:33:25.198875  272590 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001760288s
	I1212 00:33:25.199015  272590 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:33:25.199122  272590 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1212 00:33:25.199238  272590 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:33:25.199343  272590 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:33:25.199415  272590 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.336113ms
	I1212 00:33:25.199497  272590 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.771564525s
	I1212 00:33:25.199552  272590 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501309707s
	I1212 00:33:25.199690  272590 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:33:25.199852  272590 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:33:25.199905  272590 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:33:25.200119  272590 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-675290 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:33:25.200203  272590 kubeadm.go:319] [bootstrap-token] Using token: 6k5hlm.3rv4y6xn4tgjibyr
	I1212 00:33:25.201870  272590 out.go:252]   - Configuring RBAC rules ...
	I1212 00:33:25.201974  272590 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:33:25.202073  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:33:25.202233  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:33:25.202380  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:33:25.202516  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:33:25.202618  272590 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:33:25.202732  272590 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:33:25.202779  272590 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:33:25.202821  272590 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:33:25.202826  272590 kubeadm.go:319] 
	I1212 00:33:25.202910  272590 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:33:25.202924  272590 kubeadm.go:319] 
	I1212 00:33:25.203016  272590 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:33:25.203024  272590 kubeadm.go:319] 
	I1212 00:33:25.203058  272590 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:33:25.203145  272590 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:33:25.203208  272590 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:33:25.203217  272590 kubeadm.go:319] 
	I1212 00:33:25.203293  272590 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:33:25.203301  272590 kubeadm.go:319] 
	I1212 00:33:25.203364  272590 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:33:25.203385  272590 kubeadm.go:319] 
	I1212 00:33:25.203462  272590 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:33:25.203594  272590 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:33:25.203668  272590 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:33:25.203679  272590 kubeadm.go:319] 
	I1212 00:33:25.203748  272590 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:33:25.203818  272590 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:33:25.203825  272590 kubeadm.go:319] 
	I1212 00:33:25.203893  272590 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6k5hlm.3rv4y6xn4tgjibyr \
	I1212 00:33:25.203984  272590 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:33:25.204004  272590 kubeadm.go:319] 	--control-plane 
	I1212 00:33:25.204007  272590 kubeadm.go:319] 
	I1212 00:33:25.204131  272590 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:33:25.204141  272590 kubeadm.go:319] 
	I1212 00:33:25.204227  272590 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6k5hlm.3rv4y6xn4tgjibyr \
	I1212 00:33:25.204320  272590 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1212 00:33:25.204332  272590 cni.go:84] Creating CNI manager for ""
	I1212 00:33:25.204338  272590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:25.205594  272590 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 00:33:23.655263  270803 addons.go:530] duration metric: took 639.880413ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 00:33:23.955245  270803 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-743506" context rescaled to 1 replicas
	W1212 00:33:25.457032  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:22.290852  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:22.291316  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:22.791613  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:22.791924  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:23.291830  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:23.292241  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:23.791654  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:23.793187  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:24.291592  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:24.291924  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:24.791612  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:24.792009  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:25.291649  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:25.292046  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:25.791751  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:25.792178  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:26.291614  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:26.291985  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:26.791624  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:26.792019  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
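	The repeated healthz lines above are a wait loop: while the apiserver container restarts, the endpoint is probed roughly every 500ms and connection-refused errors are tolerated until a deadline. A rough shell equivalent of that kind of loop (the endpoint comes from the log; the 4-minute budget is illustrative, not minikube's exact timeout):

	    # Poll the apiserver health endpoint until it answers or the budget runs out.
	    deadline=$((SECONDS + 240))
	    until curl -ksf https://192.168.85.2:8443/healthz >/dev/null; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo 'apiserver never became healthy' >&2; exit 1; }
	      sleep 0.5
	    done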
	I1212 00:33:25.206570  272590 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:33:25.210928  272590 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1212 00:33:25.210950  272590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:33:25.224287  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:33:25.457167  272590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:33:25.457324  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:25.457418  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-675290 minikube.k8s.io/updated_at=2025_12_12T00_33_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=no-preload-675290 minikube.k8s.io/primary=true
	I1212 00:33:25.476212  272590 ops.go:34] apiserver oom_adj: -16
	I1212 00:33:25.564286  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:26.065329  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:26.564328  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:27.064508  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:24.736576  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:33:24.755063  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 00:33:24.774545  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:33:24.793295  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:33:24.812705  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:33:24.830500  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:33:24.850525  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:33:24.868692  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:33:24.886268  278144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:33:24.897891  278144 ssh_runner.go:195] Run: openssl version
	I1212 00:33:24.904042  278144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.911176  278144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:33:24.918098  278144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.921629  278144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.921676  278144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.959657  278144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:24.967294  278144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:24.975343  278144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:24.982273  278144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:33:24.990092  278144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:24.993779  278144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:24.993827  278144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:25.031998  278144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:33:25.039700  278144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:33:25.048043  278144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.056012  278144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:33:25.064509  278144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.069175  278144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.069228  278144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.108584  278144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:33:25.117805  278144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
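	Each extra CA in the block above is trusted by linking it into /etc/ssl/certs twice: once under its own name and once under the subject-hash name that OpenSSL looks up at verification time (the hash is the openssl x509 -hash output, with a .0 suffix). The convention for a single certificate, using one of the files from the log:

	    # Link a CA into the system trust store under both its name and its subject hash.
	    cert=/usr/share/ca-certificates/145032.pem
	    hash=$(openssl x509 -hash -noout -in "$cert")
	    sudo ln -fs "$cert" /etc/ssl/certs/145032.pem
	    sudo ln -fs /etc/ssl/certs/145032.pem "/etc/ssl/certs/${hash}.0"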
	I1212 00:33:25.124959  278144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:33:25.128496  278144 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:33:25.128549  278144 kubeadm.go:401] StartCluster: {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:33:25.128616  278144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:33:25.128656  278144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:33:25.152735  278144 cri.go:89] found id: ""
	I1212 00:33:25.152791  278144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:33:25.160002  278144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:33:25.168186  278144 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:33:25.168235  278144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:33:25.175527  278144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:33:25.175541  278144 kubeadm.go:158] found existing configuration files:
	
	I1212 00:33:25.175574  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:33:25.182726  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:33:25.182769  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:33:25.190663  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:33:25.198827  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:33:25.198870  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:33:25.206417  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:33:25.214331  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:33:25.214380  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:33:25.221328  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:33:25.228771  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:33:25.228812  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
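	The grep/rm pairs above implement stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint (including files that simply do not exist, as here on first start) is removed before kubeadm runs. Collapsed into one loop:

	    # Remove kubeconfigs that do not point at the expected control-plane endpoint.
	    endpoint='https://control-plane.minikube.internal:8443'
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done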
	I1212 00:33:25.236619  278144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:33:25.298265  278144 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:33:25.365354  278144 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:33:27.564979  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:28.064794  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:28.564602  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:29.064565  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:29.565218  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:30.064844  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:30.133149  272590 kubeadm.go:1114] duration metric: took 4.675879902s to wait for elevateKubeSystemPrivileges
	I1212 00:33:30.133195  272590 kubeadm.go:403] duration metric: took 12.489653684s to StartCluster
	I1212 00:33:30.133220  272590 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:30.133290  272590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:30.134407  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:30.145606  272590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:33:30.145618  272590 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:30.145650  272590 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:33:30.145741  272590 addons.go:70] Setting storage-provisioner=true in profile "no-preload-675290"
	I1212 00:33:30.145762  272590 addons.go:239] Setting addon storage-provisioner=true in "no-preload-675290"
	I1212 00:33:30.145799  272590 host.go:66] Checking if "no-preload-675290" exists ...
	I1212 00:33:30.145837  272590 config.go:182] Loaded profile config "no-preload-675290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:33:30.145762  272590 addons.go:70] Setting default-storageclass=true in profile "no-preload-675290"
	I1212 00:33:30.145893  272590 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-675290"
	I1212 00:33:30.146227  272590 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:33:30.146407  272590 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:33:30.208678  272590 addons.go:239] Setting addon default-storageclass=true in "no-preload-675290"
	I1212 00:33:30.208723  272590 host.go:66] Checking if "no-preload-675290" exists ...
	I1212 00:33:30.209146  272590 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:33:30.209555  272590 out.go:179] * Verifying Kubernetes components...
	I1212 00:33:30.228632  272590 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:30.229410  272590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:33:30.229504  272590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:33:30.230540  272590 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:33:30.231654  272590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:30.232880  272590 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:30.232898  272590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:33:30.232958  272590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:33:30.241902  272590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:33:30.255977  272590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:33:30.260883  272590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:33:30.396215  272590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:30.411003  272590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:30.418969  272590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:30.499202  272590 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
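	The "host record injected" message is the result of the sed pipeline a few lines up: the coredns ConfigMap is fetched, a hosts{} block resolving host.minikube.internal to the host gateway is inserted ahead of the forward plugin in the Corefile, and the ConfigMap is replaced. The same pipeline, reformatted and using a plain kubectl against the cluster kubeconfig (minikube invokes its bundled kubectl binary instead):

	    # Insert a hosts{} stanza for host.minikube.internal before CoreDNS's forward plugin.
	    kubectl -n kube-system get configmap coredns -o yaml \
	      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' \
	      | kubectl replace -f -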
	I1212 00:33:30.732276  272590 node_ready.go:35] waiting up to 6m0s for node "no-preload-675290" to be "Ready" ...
	I1212 00:33:30.733577  272590 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1212 00:33:27.955456  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	W1212 00:33:30.457818  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:27.291880  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:27.292353  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:27.790970  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:27.791329  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:28.290880  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:28.291231  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:28.790864  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:28.791237  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:29.290866  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:29.291244  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:29.790872  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:29.791234  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:30.291461  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:30.291886  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:30.791224  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:30.734544  272590 addons.go:530] duration metric: took 588.890424ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1212 00:33:31.004952  272590 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-675290" context rescaled to 1 replicas
	W1212 00:33:32.955094  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	W1212 00:33:34.955638  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:35.791724  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:33:35.791770  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:37.194116  278144 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:33:37.194203  278144 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:33:37.194314  278144 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:33:37.194389  278144 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:33:37.194458  278144 kubeadm.go:319] OS: Linux
	I1212 00:33:37.194544  278144 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:33:37.194613  278144 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:33:37.194687  278144 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:33:37.194756  278144 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:33:37.194825  278144 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:33:37.194905  278144 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:33:37.194979  278144 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:33:37.195045  278144 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:33:37.195123  278144 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:33:37.195248  278144 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:33:37.195386  278144 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:33:37.195465  278144 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:33:37.196692  278144 out.go:252]   - Generating certificates and keys ...
	I1212 00:33:37.196747  278144 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:33:37.196815  278144 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:33:37.196870  278144 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:33:37.196966  278144 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:33:37.197058  278144 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:33:37.197125  278144 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:33:37.197200  278144 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:33:37.197369  278144 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-858659 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:33:37.197413  278144 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:33:37.197552  278144 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-858659 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:33:37.197607  278144 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:33:37.197660  278144 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:33:37.197696  278144 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:33:37.197749  278144 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:33:37.197790  278144 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:33:37.197879  278144 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:33:37.197966  278144 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:33:37.198076  278144 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:33:37.198145  278144 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:33:37.198253  278144 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:33:37.198355  278144 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:33:37.199406  278144 out.go:252]   - Booting up control plane ...
	I1212 00:33:37.199517  278144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:33:37.199604  278144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:33:37.199697  278144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:33:37.199790  278144 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:33:37.199864  278144 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:33:37.199961  278144 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:33:37.200052  278144 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:33:37.200115  278144 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:33:37.200307  278144 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:33:37.200411  278144 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:33:37.200521  278144 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001833759s
	I1212 00:33:37.200639  278144 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:33:37.200752  278144 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1212 00:33:37.200863  278144 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:33:37.200962  278144 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:33:37.201076  278144 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.434651471s
	I1212 00:33:37.201168  278144 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.026988685s
	I1212 00:33:37.201264  278144 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00162745s
	I1212 00:33:37.201389  278144 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:33:37.201534  278144 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:33:37.201603  278144 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:33:37.201804  278144 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-858659 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:33:37.201887  278144 kubeadm.go:319] [bootstrap-token] Using token: ggt060.eefg72qnn6nqw2lf
	I1212 00:33:37.203122  278144 out.go:252]   - Configuring RBAC rules ...
	I1212 00:33:37.203215  278144 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:33:37.203290  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:33:37.203414  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:33:37.203586  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:33:37.203694  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:33:37.203763  278144 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:33:37.203865  278144 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:33:37.203907  278144 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:33:37.203946  278144 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:33:37.203952  278144 kubeadm.go:319] 
	I1212 00:33:37.204010  278144 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:33:37.204016  278144 kubeadm.go:319] 
	I1212 00:33:37.204085  278144 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:33:37.204091  278144 kubeadm.go:319] 
	I1212 00:33:37.204114  278144 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:33:37.204164  278144 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:33:37.204212  278144 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:33:37.204218  278144 kubeadm.go:319] 
	I1212 00:33:37.204259  278144 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:33:37.204264  278144 kubeadm.go:319] 
	I1212 00:33:37.204300  278144 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:33:37.204305  278144 kubeadm.go:319] 
	I1212 00:33:37.204349  278144 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:33:37.204441  278144 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:33:37.204532  278144 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:33:37.204545  278144 kubeadm.go:319] 
	I1212 00:33:37.204614  278144 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:33:37.204686  278144 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:33:37.204691  278144 kubeadm.go:319] 
	I1212 00:33:37.204771  278144 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ggt060.eefg72qnn6nqw2lf \
	I1212 00:33:37.204864  278144 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:33:37.204883  278144 kubeadm.go:319] 	--control-plane 
	I1212 00:33:37.204886  278144 kubeadm.go:319] 
	I1212 00:33:37.204951  278144 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:33:37.204957  278144 kubeadm.go:319] 
	I1212 00:33:37.205046  278144 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ggt060.eefg72qnn6nqw2lf \
	I1212 00:33:37.205184  278144 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1212 00:33:37.205197  278144 cni.go:84] Creating CNI manager for ""
	I1212 00:33:37.205206  278144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:37.206375  278144 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1212 00:33:32.734941  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	W1212 00:33:34.735837  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	W1212 00:33:37.235035  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	I1212 00:33:37.207280  278144 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:33:37.211289  278144 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 00:33:37.211304  278144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:33:37.224403  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:33:37.418751  278144 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:33:37.418835  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:37.418886  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-858659 minikube.k8s.io/updated_at=2025_12_12T00_33_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=embed-certs-858659 minikube.k8s.io/primary=true
	I1212 00:33:37.430124  278144 ops.go:34] apiserver oom_adj: -16
	I1212 00:33:37.494810  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:37.995607  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:38.495663  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:38.995202  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:39.495603  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1212 00:33:36.956141  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:38.455307  270803 node_ready.go:49] node "old-k8s-version-743506" is "Ready"
	I1212 00:33:38.455334  270803 node_ready.go:38] duration metric: took 15.002846535s for node "old-k8s-version-743506" to be "Ready" ...
	I1212 00:33:38.455348  270803 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:33:38.455398  270803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:33:38.467187  270803 api_server.go:72] duration metric: took 15.451831949s to wait for apiserver process to appear ...
	I1212 00:33:38.467214  270803 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:33:38.467240  270803 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:33:38.471174  270803 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1212 00:33:38.474455  270803 api_server.go:141] control plane version: v1.28.0
	I1212 00:33:38.474507  270803 api_server.go:131] duration metric: took 7.285296ms to wait for apiserver health ...
	I1212 00:33:38.474518  270803 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:33:38.478117  270803 system_pods.go:59] 8 kube-system pods found
	I1212 00:33:38.478153  270803 system_pods.go:61] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:38.478162  270803 system_pods.go:61] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running
	I1212 00:33:38.478171  270803 system_pods.go:61] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:33:38.478177  270803 system_pods.go:61] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running
	I1212 00:33:38.478183  270803 system_pods.go:61] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running
	I1212 00:33:38.478189  270803 system_pods.go:61] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:33:38.478195  270803 system_pods.go:61] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running
	I1212 00:33:38.478204  270803 system_pods.go:61] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:33:38.478211  270803 system_pods.go:74] duration metric: took 3.685843ms to wait for pod list to return data ...
	I1212 00:33:38.478222  270803 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:33:38.480216  270803 default_sa.go:45] found service account: "default"
	I1212 00:33:38.480238  270803 default_sa.go:55] duration metric: took 2.008735ms for default service account to be created ...
	I1212 00:33:38.480247  270803 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:33:38.483031  270803 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:38.483061  270803 system_pods.go:89] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:38.483070  270803 system_pods.go:89] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running
	I1212 00:33:38.483077  270803 system_pods.go:89] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:33:38.483087  270803 system_pods.go:89] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running
	I1212 00:33:38.483097  270803 system_pods.go:89] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running
	I1212 00:33:38.483104  270803 system_pods.go:89] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:33:38.483109  270803 system_pods.go:89] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running
	I1212 00:33:38.483118  270803 system_pods.go:89] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:33:38.483126  270803 system_pods.go:126] duration metric: took 2.872475ms to wait for k8s-apps to be running ...
	I1212 00:33:38.483137  270803 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:33:38.483182  270803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:33:38.496061  270803 system_svc.go:56] duration metric: took 12.919771ms WaitForService to wait for kubelet
	I1212 00:33:38.496082  270803 kubeadm.go:587] duration metric: took 15.480732762s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:33:38.496102  270803 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:33:38.497977  270803 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:33:38.497995  270803 node_conditions.go:123] node cpu capacity is 8
	I1212 00:33:38.498010  270803 node_conditions.go:105] duration metric: took 1.903088ms to run NodePressure ...
	I1212 00:33:38.498022  270803 start.go:242] waiting for startup goroutines ...
	I1212 00:33:38.498030  270803 start.go:247] waiting for cluster config update ...
	I1212 00:33:38.498043  270803 start.go:256] writing updated cluster config ...
	I1212 00:33:38.498328  270803 ssh_runner.go:195] Run: rm -f paused
	I1212 00:33:38.501782  270803 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:38.505265  270803 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.510596  270803 pod_ready.go:94] pod "coredns-5dd5756b68-nxwdc" is "Ready"
	I1212 00:33:39.510619  270803 pod_ready.go:86] duration metric: took 1.005335277s for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.513364  270803 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.517516  270803 pod_ready.go:94] pod "etcd-old-k8s-version-743506" is "Ready"
	I1212 00:33:39.517534  270803 pod_ready.go:86] duration metric: took 4.146163ms for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.520326  270803 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.524429  270803 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-743506" is "Ready"
	I1212 00:33:39.524446  270803 pod_ready.go:86] duration metric: took 4.103471ms for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.527203  270803 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.709024  270803 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-743506" is "Ready"
	I1212 00:33:39.709046  270803 pod_ready.go:86] duration metric: took 181.825306ms for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.909520  270803 pod_ready.go:83] waiting for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.308323  270803 pod_ready.go:94] pod "kube-proxy-pz8kt" is "Ready"
	I1212 00:33:40.308348  270803 pod_ready.go:86] duration metric: took 398.805252ms for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.509941  270803 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.908891  270803 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-743506" is "Ready"
	I1212 00:33:40.908911  270803 pod_ready.go:86] duration metric: took 398.947384ms for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.908922  270803 pod_ready.go:40] duration metric: took 2.407114173s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:40.958106  270803 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1212 00:33:40.959982  270803 out.go:203] 
	W1212 00:33:40.961226  270803 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1212 00:33:40.962401  270803 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1212 00:33:40.963860  270803 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-743506" cluster and "default" namespace by default
	I1212 00:33:40.795082  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:33:40.795136  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:33:40.795187  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:33:40.820906  263844 cri.go:89] found id: "e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:33:40.820925  263844 cri.go:89] found id: "e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254"
	I1212 00:33:40.820929  263844 cri.go:89] found id: ""
	I1212 00:33:40.820936  263844 logs.go:282] 2 containers: [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106 e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254]
	I1212 00:33:40.820987  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.824897  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.828680  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:33:40.828744  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:33:40.854459  263844 cri.go:89] found id: ""
	I1212 00:33:40.854509  263844 logs.go:282] 0 containers: []
	W1212 00:33:40.854518  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:33:40.854526  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:33:40.854579  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:33:40.881533  263844 cri.go:89] found id: ""
	I1212 00:33:40.881555  263844 logs.go:282] 0 containers: []
	W1212 00:33:40.881564  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:33:40.881572  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:33:40.881630  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:33:40.906345  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:33:40.906365  263844 cri.go:89] found id: ""
	I1212 00:33:40.906374  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:33:40.906435  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.910519  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:33:40.910577  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:33:40.939434  263844 cri.go:89] found id: ""
	I1212 00:33:40.939463  263844 logs.go:282] 0 containers: []
	W1212 00:33:40.939499  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:33:40.939509  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:33:40.939555  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:33:40.966792  263844 cri.go:89] found id: "b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:33:40.966812  263844 cri.go:89] found id: "962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e"
	I1212 00:33:40.966818  263844 cri.go:89] found id: ""
	I1212 00:33:40.966826  263844 logs.go:282] 2 containers: [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0 962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e]
	I1212 00:33:40.966878  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.970829  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.974717  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:33:40.974776  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:33:41.005958  263844 cri.go:89] found id: ""
	I1212 00:33:41.005985  263844 logs.go:282] 0 containers: []
	W1212 00:33:41.005996  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:33:41.006006  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:33:41.006056  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:33:41.039359  263844 cri.go:89] found id: ""
	I1212 00:33:41.039388  263844 logs.go:282] 0 containers: []
	W1212 00:33:41.039399  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:33:41.039418  263844 logs.go:123] Gathering logs for kube-controller-manager [962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e] ...
	I1212 00:33:41.039433  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e"
	I1212 00:33:41.071692  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:33:41.071722  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:33:41.114665  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:33:41.114701  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:33:41.158370  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:33:41.158395  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:33:41.172978  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:33:41.173006  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:33:41.203032  263844 logs.go:123] Gathering logs for kube-apiserver [e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254] ...
	I1212 00:33:41.203057  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254"
	I1212 00:33:41.234616  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:33:41.234651  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:33:41.261883  263844 logs.go:123] Gathering logs for kube-controller-manager [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0] ...
	I1212 00:33:41.261908  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:33:41.288586  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:33:41.288610  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:33:41.345462  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:33:41.345501  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:33:39.235689  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	W1212 00:33:41.735356  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	I1212 00:33:39.995687  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:40.495092  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:40.995851  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:41.495552  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:41.995217  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:42.495439  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:42.557808  278144 kubeadm.go:1114] duration metric: took 5.139031528s to wait for elevateKubeSystemPrivileges
	I1212 00:33:42.557850  278144 kubeadm.go:403] duration metric: took 17.429303229s to StartCluster
	I1212 00:33:42.557872  278144 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:42.557936  278144 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:42.559776  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:42.560013  278144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:33:42.560028  278144 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:33:42.560006  278144 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:42.560108  278144 addons.go:70] Setting default-storageclass=true in profile "embed-certs-858659"
	I1212 00:33:42.560154  278144 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-858659"
	I1212 00:33:42.560226  278144 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:33:42.560101  278144 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-858659"
	I1212 00:33:42.560305  278144 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-858659"
	I1212 00:33:42.560343  278144 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:33:42.560558  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:42.560804  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:42.562345  278144 out.go:179] * Verifying Kubernetes components...
	I1212 00:33:42.563485  278144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:42.582693  278144 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:33:42.583659  278144 addons.go:239] Setting addon default-storageclass=true in "embed-certs-858659"
	I1212 00:33:42.583715  278144 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:33:42.584226  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:42.586890  278144 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:42.586913  278144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:33:42.586987  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:42.611417  278144 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:42.611444  278144 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:33:42.611665  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:42.616944  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:42.634991  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:42.644809  278144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:33:42.696105  278144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:42.731013  278144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:42.747779  278144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:42.816002  278144 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1212 00:33:42.817999  278144 node_ready.go:35] waiting up to 6m0s for node "embed-certs-858659" to be "Ready" ...
	I1212 00:33:43.019945  278144 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:33:43.021070  278144 addons.go:530] duration metric: took 461.030823ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 00:33:43.319944  278144 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-858659" context rescaled to 1 replicas
	I1212 00:33:43.734653  272590 node_ready.go:49] node "no-preload-675290" is "Ready"
	I1212 00:33:43.734705  272590 node_ready.go:38] duration metric: took 13.002376355s for node "no-preload-675290" to be "Ready" ...
	I1212 00:33:43.734724  272590 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:33:43.734797  272590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:33:43.750081  272590 api_server.go:72] duration metric: took 13.604432741s to wait for apiserver process to appear ...
	I1212 00:33:43.750104  272590 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:33:43.750123  272590 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:33:43.755396  272590 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 00:33:43.756073  272590 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 00:33:43.756093  272590 api_server.go:131] duration metric: took 5.983405ms to wait for apiserver health ...
	I1212 00:33:43.756101  272590 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:33:43.759052  272590 system_pods.go:59] 8 kube-system pods found
	I1212 00:33:43.759083  272590 system_pods.go:61] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:43.759090  272590 system_pods.go:61] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:43.759097  272590 system_pods.go:61] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:43.759103  272590 system_pods.go:61] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:43.759118  272590 system_pods.go:61] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:43.759126  272590 system_pods.go:61] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:43.759132  272590 system_pods.go:61] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:43.759150  272590 system_pods.go:61] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:43.759160  272590 system_pods.go:74] duration metric: took 3.053065ms to wait for pod list to return data ...
	I1212 00:33:43.759171  272590 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:33:43.761049  272590 default_sa.go:45] found service account: "default"
	I1212 00:33:43.761074  272590 default_sa.go:55] duration metric: took 1.895794ms for default service account to be created ...
	I1212 00:33:43.761081  272590 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:33:43.763348  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:43.763384  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:43.763391  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:43.763399  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:43.763404  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:43.763419  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:43.763425  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:43.763430  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:43.763439  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:43.763507  272590 retry.go:31] will retry after 264.298758ms: missing components: kube-dns
	I1212 00:33:44.031211  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:44.031246  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:44.031253  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:44.031262  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:44.031268  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:44.031276  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:44.031281  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:44.031286  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:44.031294  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:44.031311  272590 retry.go:31] will retry after 311.660302ms: missing components: kube-dns
	I1212 00:33:44.346179  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:44.346210  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:44.346216  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:44.346220  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:44.346224  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:44.346229  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:44.346232  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:44.346235  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:44.346240  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:44.346254  272590 retry.go:31] will retry after 325.219552ms: missing components: kube-dns
	I1212 00:33:44.674796  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:44.674821  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Running
	I1212 00:33:44.674827  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:44.674831  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:44.674834  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:44.674839  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:44.674842  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:44.674847  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:44.674850  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Running
	I1212 00:33:44.674858  272590 system_pods.go:126] duration metric: took 913.771066ms to wait for k8s-apps to be running ...
	I1212 00:33:44.674868  272590 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:33:44.674910  272590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:33:44.687374  272590 system_svc.go:56] duration metric: took 12.493284ms WaitForService to wait for kubelet
	I1212 00:33:44.687396  272590 kubeadm.go:587] duration metric: took 14.541752044s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:33:44.687415  272590 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:33:44.689839  272590 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:33:44.689859  272590 node_conditions.go:123] node cpu capacity is 8
	I1212 00:33:44.689872  272590 node_conditions.go:105] duration metric: took 2.452154ms to run NodePressure ...
	I1212 00:33:44.689883  272590 start.go:242] waiting for startup goroutines ...
	I1212 00:33:44.689889  272590 start.go:247] waiting for cluster config update ...
	I1212 00:33:44.689899  272590 start.go:256] writing updated cluster config ...
	I1212 00:33:44.690128  272590 ssh_runner.go:195] Run: rm -f paused
	I1212 00:33:44.693896  272590 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:44.696772  272590 pod_ready.go:83] waiting for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.700545  272590 pod_ready.go:94] pod "coredns-7d764666f9-44t4m" is "Ready"
	I1212 00:33:44.700562  272590 pod_ready.go:86] duration metric: took 3.773553ms for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.702235  272590 pod_ready.go:83] waiting for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.705546  272590 pod_ready.go:94] pod "etcd-no-preload-675290" is "Ready"
	I1212 00:33:44.705562  272590 pod_ready.go:86] duration metric: took 3.311339ms for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.707183  272590 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.710720  272590 pod_ready.go:94] pod "kube-apiserver-no-preload-675290" is "Ready"
	I1212 00:33:44.710738  272590 pod_ready.go:86] duration metric: took 3.539875ms for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.712525  272590 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.097959  272590 pod_ready.go:94] pod "kube-controller-manager-no-preload-675290" is "Ready"
	I1212 00:33:45.097987  272590 pod_ready.go:86] duration metric: took 385.439817ms for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.297886  272590 pod_ready.go:83] waiting for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.697387  272590 pod_ready.go:94] pod "kube-proxy-7pxpp" is "Ready"
	I1212 00:33:45.697415  272590 pod_ready.go:86] duration metric: took 399.505552ms for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.898558  272590 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:46.298339  272590 pod_ready.go:94] pod "kube-scheduler-no-preload-675290" is "Ready"
	I1212 00:33:46.298363  272590 pod_ready.go:86] duration metric: took 399.784653ms for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:46.298375  272590 pod_ready.go:40] duration metric: took 1.604453206s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:46.341140  272590 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 00:33:46.342768  272590 out.go:179] * Done! kubectl is now configured to use "no-preload-675290" cluster and "default" namespace by default
	W1212 00:33:44.820992  278144 node_ready.go:57] node "embed-certs-858659" has "Ready":"False" status (will retry)
	W1212 00:33:47.321019  278144 node_ready.go:57] node "embed-certs-858659" has "Ready":"False" status (will retry)
	I1212 00:33:51.580220  263844 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.234699713s)
	W1212 00:33:51.580261  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:43158->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:43158->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	W1212 00:33:49.821281  278144 node_ready.go:57] node "embed-certs-858659" has "Ready":"False" status (will retry)
	W1212 00:33:52.320588  278144 node_ready.go:57] node "embed-certs-858659" has "Ready":"False" status (will retry)
	I1212 00:33:53.820142  278144 node_ready.go:49] node "embed-certs-858659" is "Ready"
	I1212 00:33:53.820171  278144 node_ready.go:38] duration metric: took 11.00213947s for node "embed-certs-858659" to be "Ready" ...
	I1212 00:33:53.820184  278144 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:33:53.820228  278144 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:33:53.832052  278144 api_server.go:72] duration metric: took 11.271920587s to wait for apiserver process to appear ...
	I1212 00:33:53.832072  278144 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:33:53.832087  278144 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:33:53.835866  278144 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1212 00:33:53.836816  278144 api_server.go:141] control plane version: v1.34.2
	I1212 00:33:53.836852  278144 api_server.go:131] duration metric: took 4.772765ms to wait for apiserver health ...
	I1212 00:33:53.836863  278144 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:33:53.839734  278144 system_pods.go:59] 8 kube-system pods found
	I1212 00:33:53.839760  278144 system_pods.go:61] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:53.839765  278144 system_pods.go:61] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running
	I1212 00:33:53.839772  278144 system_pods.go:61] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running
	I1212 00:33:53.839776  278144 system_pods.go:61] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running
	I1212 00:33:53.839780  278144 system_pods.go:61] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running
	I1212 00:33:53.839783  278144 system_pods.go:61] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running
	I1212 00:33:53.839786  278144 system_pods.go:61] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running
	I1212 00:33:53.839800  278144 system_pods.go:61] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:53.839809  278144 system_pods.go:74] duration metric: took 2.9403ms to wait for pod list to return data ...
	I1212 00:33:53.839816  278144 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:33:53.841654  278144 default_sa.go:45] found service account: "default"
	I1212 00:33:53.841674  278144 default_sa.go:55] duration metric: took 1.848813ms for default service account to be created ...
	I1212 00:33:53.841685  278144 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:33:53.847152  278144 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:53.847189  278144 system_pods.go:89] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:53.847196  278144 system_pods.go:89] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running
	I1212 00:33:53.847205  278144 system_pods.go:89] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running
	I1212 00:33:53.847211  278144 system_pods.go:89] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running
	I1212 00:33:53.847217  278144 system_pods.go:89] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running
	I1212 00:33:53.847221  278144 system_pods.go:89] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running
	I1212 00:33:53.847226  278144 system_pods.go:89] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running
	I1212 00:33:53.847233  278144 system_pods.go:89] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:53.847255  278144 retry.go:31] will retry after 242.15138ms: missing components: kube-dns
	I1212 00:33:54.092517  278144 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:54.092553  278144 system_pods.go:89] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:54.092562  278144 system_pods.go:89] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running
	I1212 00:33:54.092571  278144 system_pods.go:89] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running
	I1212 00:33:54.092577  278144 system_pods.go:89] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running
	I1212 00:33:54.092588  278144 system_pods.go:89] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running
	I1212 00:33:54.092597  278144 system_pods.go:89] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running
	I1212 00:33:54.092603  278144 system_pods.go:89] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running
	I1212 00:33:54.092614  278144 system_pods.go:89] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:54.092635  278144 retry.go:31] will retry after 275.013468ms: missing components: kube-dns
	I1212 00:33:54.371511  278144 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:54.371568  278144 system_pods.go:89] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:54.371583  278144 system_pods.go:89] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running
	I1212 00:33:54.371592  278144 system_pods.go:89] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running
	I1212 00:33:54.371596  278144 system_pods.go:89] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running
	I1212 00:33:54.371600  278144 system_pods.go:89] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running
	I1212 00:33:54.371606  278144 system_pods.go:89] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running
	I1212 00:33:54.371610  278144 system_pods.go:89] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running
	I1212 00:33:54.371615  278144 system_pods.go:89] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:54.371631  278144 retry.go:31] will retry after 370.876841ms: missing components: kube-dns
	
	
	==> CRI-O <==
	Dec 12 00:33:43 no-preload-675290 crio[769]: time="2025-12-12T00:33:43.711056675Z" level=info msg="Starting container: 5bebba673eb776ba73d62a8bc78834e3eba6c429741e01f838c7c81fbe89b73a" id=391cfb85-dc92-451b-a929-6efbd633f522 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:33:43 no-preload-675290 crio[769]: time="2025-12-12T00:33:43.713090011Z" level=info msg="Started container" PID=2796 containerID=5bebba673eb776ba73d62a8bc78834e3eba6c429741e01f838c7c81fbe89b73a description=kube-system/coredns-7d764666f9-44t4m/coredns id=391cfb85-dc92-451b-a929-6efbd633f522 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8d75adcbf906390d817c46b2bd1313763a4ddfb8694a40222db3992c563429d
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.795236575Z" level=info msg="Running pod sandbox: default/busybox/POD" id=97b34bfd-9f61-4daf-88fc-e45e60581966 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.795321783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.799853203Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d3d9635b67e7d1d90f46e378ef0131be583ee1b3847515f820952dfcfc40b092 UID:9ea911e6-9d84-479e-80ec-f198c0da93b7 NetNS:/var/run/netns/bdd60b31-aa1f-4cb8-9267-ef2b05005ebf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00052a5f8}] Aliases:map[]}"
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.799880894Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.809465665Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d3d9635b67e7d1d90f46e378ef0131be583ee1b3847515f820952dfcfc40b092 UID:9ea911e6-9d84-479e-80ec-f198c0da93b7 NetNS:/var/run/netns/bdd60b31-aa1f-4cb8-9267-ef2b05005ebf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00052a5f8}] Aliases:map[]}"
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.809705384Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.811115495Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.812317926Z" level=info msg="Ran pod sandbox d3d9635b67e7d1d90f46e378ef0131be583ee1b3847515f820952dfcfc40b092 with infra container: default/busybox/POD" id=97b34bfd-9f61-4daf-88fc-e45e60581966 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.813645378Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c746ab9f-fade-4be9-b05e-2bd2712d3e3b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.813745722Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c746ab9f-fade-4be9-b05e-2bd2712d3e3b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.813776157Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c746ab9f-fade-4be9-b05e-2bd2712d3e3b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.814524307Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9b550081-0483-424d-adbb-114d6d6b71c2 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:33:46 no-preload-675290 crio[769]: time="2025-12-12T00:33:46.815949247Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 12 00:33:47 no-preload-675290 crio[769]: time="2025-12-12T00:33:47.388794781Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9b550081-0483-424d-adbb-114d6d6b71c2 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:33:47 no-preload-675290 crio[769]: time="2025-12-12T00:33:47.38933159Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=57e536ca-ea28-4402-a30a-b80b06d53802 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:47 no-preload-675290 crio[769]: time="2025-12-12T00:33:47.390768472Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bc3c2f06-6d86-4d0b-965d-7792662e7966 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:47 no-preload-675290 crio[769]: time="2025-12-12T00:33:47.393855247Z" level=info msg="Creating container: default/busybox/busybox" id=1c266297-bd8c-45f8-b395-130ebcc8f8f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:33:47 no-preload-675290 crio[769]: time="2025-12-12T00:33:47.393963641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:47 no-preload-675290 crio[769]: time="2025-12-12T00:33:47.398327381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:47 no-preload-675290 crio[769]: time="2025-12-12T00:33:47.398787507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:47 no-preload-675290 crio[769]: time="2025-12-12T00:33:47.4247712Z" level=info msg="Created container b0ea76be387b33ad3f4043e24408fc5099e3ce23d223e926e285eaae1a832fc1: default/busybox/busybox" id=1c266297-bd8c-45f8-b395-130ebcc8f8f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:33:47 no-preload-675290 crio[769]: time="2025-12-12T00:33:47.425196131Z" level=info msg="Starting container: b0ea76be387b33ad3f4043e24408fc5099e3ce23d223e926e285eaae1a832fc1" id=0840c205-9301-438b-9450-8d9713d32ee4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:33:47 no-preload-675290 crio[769]: time="2025-12-12T00:33:47.426701136Z" level=info msg="Started container" PID=2874 containerID=b0ea76be387b33ad3f4043e24408fc5099e3ce23d223e926e285eaae1a832fc1 description=default/busybox/busybox id=0840c205-9301-438b-9450-8d9713d32ee4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3d9635b67e7d1d90f46e378ef0131be583ee1b3847515f820952dfcfc40b092
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b0ea76be387b3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   d3d9635b67e7d       busybox                                     default
	5bebba673eb77       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   f8d75adcbf906       coredns-7d764666f9-44t4m                    kube-system
	211d8f9cc0fc9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   18f73a1e1356b       storage-provisioner                         kube-system
	1d8e76143cd8f       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   c6bc948cf973f       kindnet-ng47n                               kube-system
	c328391a9298d       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      25 seconds ago      Running             kube-proxy                0                   caec628f8e67c       kube-proxy-7pxpp                            kube-system
	ede2eca9b92ce       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   30cd4b665276e       etcd-no-preload-675290                      kube-system
	08baa3e33bd1d       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      35 seconds ago      Running             kube-scheduler            0                   0265ab0140324       kube-scheduler-no-preload-675290            kube-system
	819e230749bb8       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      35 seconds ago      Running             kube-apiserver            0                   953a9b988bc72       kube-apiserver-no-preload-675290            kube-system
	7d4689fbb5ae2       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      35 seconds ago      Running             kube-controller-manager   0                   924ebb7086bdc       kube-controller-manager-no-preload-675290   kube-system
	
	
	==> coredns [5bebba673eb776ba73d62a8bc78834e3eba6c429741e01f838c7c81fbe89b73a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37823 - 50277 "HINFO IN 7204924707509095174.7462067626420841258. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.086764518s
	
	
	==> describe nodes <==
	Name:               no-preload-675290
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-675290
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=no-preload-675290
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_33_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:33:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-675290
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:33:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:33:54 +0000   Fri, 12 Dec 2025 00:33:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:33:54 +0000   Fri, 12 Dec 2025 00:33:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:33:54 +0000   Fri, 12 Dec 2025 00:33:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:33:54 +0000   Fri, 12 Dec 2025 00:33:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-675290
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                bb171fe3-47ef-405d-9d08-6137f609e70c
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-44t4m                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-675290                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-ng47n                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-675290             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-675290    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-7pxpp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-675290             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node no-preload-675290 event: Registered Node no-preload-675290 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [ede2eca9b92cef3f55863962e17dbd0712fee589065b26a7344c3c76996de12a] <==
	{"level":"warn","ts":"2025-12-12T00:33:21.373002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.382933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.390692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.398012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.404691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.412169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.420756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.428054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.436269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.445040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.454110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.466533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.477210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.486179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.494136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.503938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.512156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.519424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.526885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.535380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.546844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.553904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.561070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.568714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:21.615552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48474","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:33:55 up  1:16,  0 user,  load average: 2.95, 2.50, 1.68
	Linux no-preload-675290 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1d8e76143cd8f25bbf9b3272aabfe2c520858f1ddfabc4394663138b7bfdfaca] <==
	I1212 00:33:32.789590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:33:32.789864       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 00:33:32.790005       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:33:32.790022       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:33:32.790045       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:33:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:33:32.992116       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:33:32.992181       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:33:32.992194       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:33:32.992366       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:33:33.492804       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:33:33.492827       1 metrics.go:72] Registering metrics
	I1212 00:33:33.492874       1 controller.go:711] "Syncing nftables rules"
	I1212 00:33:42.993113       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:33:42.993167       1 main.go:301] handling current node
	I1212 00:33:52.992580       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:33:52.992609       1 main.go:301] handling current node
	
	
	==> kube-apiserver [819e230749bb8d5a6fbe02b1348b3bd2f79148d455ba4b4c50aa822e8919fca1] <==
	I1212 00:33:22.090314       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1212 00:33:22.090761       1 controller.go:667] quota admission added evaluator for: namespaces
	E1212 00:33:22.094085       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1212 00:33:22.096225       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1212 00:33:22.100983       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:33:22.101460       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:33:22.131019       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 00:33:22.297802       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:33:22.994276       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1212 00:33:23.002400       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1212 00:33:23.002503       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 00:33:23.617889       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:33:23.655097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:33:23.799235       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 00:33:23.804794       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1212 00:33:23.805886       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:33:23.811408       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:33:24.013897       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:33:24.591232       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:33:24.598611       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 00:33:24.606521       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 00:33:29.565740       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:33:29.867343       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:33:29.870469       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:33:30.014584       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7d4689fbb5ae26af769819d2efaf4a237d358185d6cc62fcaef50481eb7380ac] <==
	I1212 00:33:28.820198       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.821445       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.821498       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.821563       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.821665       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.821720       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.821782       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.821819       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.821860       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.822002       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.822150       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.822086       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.822646       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.822657       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.822677       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.822665       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.822707       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.825239       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-675290" podCIDRs=["10.244.0.0/24"]
	I1212 00:33:28.828008       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:33:28.829448       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.920445       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:28.920464       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 00:33:28.920470       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 00:33:28.928405       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:43.818900       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [c328391a9298d0bc64a2f18ecd51629b5c22ca333adcd47d5ef86d6526ac4a5b] <==
	I1212 00:33:30.528561       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:33:30.594206       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:33:30.694391       1 shared_informer.go:377] "Caches are synced"
	I1212 00:33:30.694442       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 00:33:30.694569       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:33:30.719382       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:33:30.719459       1 server_linux.go:136] "Using iptables Proxier"
	I1212 00:33:30.725079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:33:30.725668       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 00:33:30.725921       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:33:30.730288       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:33:30.731022       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:33:30.730921       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:33:30.730925       1 config.go:309] "Starting node config controller"
	I1212 00:33:30.731266       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:33:30.731301       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:33:30.731034       1 config.go:200] "Starting service config controller"
	I1212 00:33:30.731356       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:33:30.731641       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:33:30.831314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:33:30.831738       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:33:30.831785       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [08baa3e33bd1d40b74c5ae0e756c6e11d1e1154ac7925e69ba5f7d61e63ba207] <==
	E1212 00:33:23.061939       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:33:23.064299       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1212 00:33:23.092954       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:33:23.095155       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1212 00:33:23.144868       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1212 00:33:23.146020       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1212 00:33:23.157109       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1212 00:33:23.158211       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1212 00:33:23.170643       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1212 00:33:23.172490       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1212 00:33:23.231761       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:33:23.232885       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1212 00:33:23.257922       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 00:33:23.260715       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1212 00:33:23.281934       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1212 00:33:23.283126       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1212 00:33:23.286929       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1212 00:33:23.288263       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1212 00:33:23.302308       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1212 00:33:23.304425       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1212 00:33:23.403736       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 00:33:23.404748       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1212 00:33:23.423614       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 00:33:23.425826       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	I1212 00:33:25.846659       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 00:33:30 no-preload-675290 kubelet[2191]: I1212 00:33:30.049874    2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dldrc\" (UniqueName: \"kubernetes.io/projected/57d08f79-0e03-4148-9724-cac54cc3a437-kube-api-access-dldrc\") pod \"kube-proxy-7pxpp\" (UID: \"57d08f79-0e03-4148-9724-cac54cc3a437\") " pod="kube-system/kube-proxy-7pxpp"
	Dec 12 00:33:30 no-preload-675290 kubelet[2191]: I1212 00:33:30.049935    2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a3a49761-52d7-4b77-a861-af908cd83f4d-cni-cfg\") pod \"kindnet-ng47n\" (UID: \"a3a49761-52d7-4b77-a861-af908cd83f4d\") " pod="kube-system/kindnet-ng47n"
	Dec 12 00:33:30 no-preload-675290 kubelet[2191]: I1212 00:33:30.049969    2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3a49761-52d7-4b77-a861-af908cd83f4d-lib-modules\") pod \"kindnet-ng47n\" (UID: \"a3a49761-52d7-4b77-a861-af908cd83f4d\") " pod="kube-system/kindnet-ng47n"
	Dec 12 00:33:30 no-preload-675290 kubelet[2191]: I1212 00:33:30.049997    2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn745\" (UniqueName: \"kubernetes.io/projected/a3a49761-52d7-4b77-a861-af908cd83f4d-kube-api-access-kn745\") pod \"kindnet-ng47n\" (UID: \"a3a49761-52d7-4b77-a861-af908cd83f4d\") " pod="kube-system/kindnet-ng47n"
	Dec 12 00:33:31 no-preload-675290 kubelet[2191]: I1212 00:33:31.480137    2191 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-7pxpp" podStartSLOduration=1.4801166559999999 podStartE2EDuration="1.480116656s" podCreationTimestamp="2025-12-12 00:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:31.479703692 +0000 UTC m=+7.138111831" watchObservedRunningTime="2025-12-12 00:33:31.480116656 +0000 UTC m=+7.138524792"
	Dec 12 00:33:32 no-preload-675290 kubelet[2191]: E1212 00:33:32.102460    2191 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-675290" containerName="kube-apiserver"
	Dec 12 00:33:33 no-preload-675290 kubelet[2191]: E1212 00:33:33.159693    2191 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-675290" containerName="etcd"
	Dec 12 00:33:33 no-preload-675290 kubelet[2191]: E1212 00:33:33.692074    2191 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-675290" containerName="kube-controller-manager"
	Dec 12 00:33:33 no-preload-675290 kubelet[2191]: I1212 00:33:33.702843    2191 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-ng47n" podStartSLOduration=1.482708937 podStartE2EDuration="3.702824882s" podCreationTimestamp="2025-12-12 00:33:30 +0000 UTC" firstStartedPulling="2025-12-12 00:33:30.35010852 +0000 UTC m=+6.008516647" lastFinishedPulling="2025-12-12 00:33:32.570224465 +0000 UTC m=+8.228632592" observedRunningTime="2025-12-12 00:33:33.478588227 +0000 UTC m=+9.136996360" watchObservedRunningTime="2025-12-12 00:33:33.702824882 +0000 UTC m=+9.361233019"
	Dec 12 00:33:38 no-preload-675290 kubelet[2191]: E1212 00:33:38.834042    2191 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-675290" containerName="kube-scheduler"
	Dec 12 00:33:42 no-preload-675290 kubelet[2191]: E1212 00:33:42.107461    2191 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-675290" containerName="kube-apiserver"
	Dec 12 00:33:43 no-preload-675290 kubelet[2191]: E1212 00:33:43.160594    2191 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-675290" containerName="etcd"
	Dec 12 00:33:43 no-preload-675290 kubelet[2191]: I1212 00:33:43.339703    2191 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 12 00:33:43 no-preload-675290 kubelet[2191]: I1212 00:33:43.445231    2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj8gb\" (UniqueName: \"kubernetes.io/projected/c0391e3a-aabe-4074-a617-136990bd5fb4-kube-api-access-cj8gb\") pod \"storage-provisioner\" (UID: \"c0391e3a-aabe-4074-a617-136990bd5fb4\") " pod="kube-system/storage-provisioner"
	Dec 12 00:33:43 no-preload-675290 kubelet[2191]: I1212 00:33:43.445278    2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c0391e3a-aabe-4074-a617-136990bd5fb4-tmp\") pod \"storage-provisioner\" (UID: \"c0391e3a-aabe-4074-a617-136990bd5fb4\") " pod="kube-system/storage-provisioner"
	Dec 12 00:33:43 no-preload-675290 kubelet[2191]: I1212 00:33:43.445312    2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cceb1c43-32c8-4878-8afd-9cffbf61ad07-config-volume\") pod \"coredns-7d764666f9-44t4m\" (UID: \"cceb1c43-32c8-4878-8afd-9cffbf61ad07\") " pod="kube-system/coredns-7d764666f9-44t4m"
	Dec 12 00:33:43 no-preload-675290 kubelet[2191]: I1212 00:33:43.445363    2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjrvv\" (UniqueName: \"kubernetes.io/projected/cceb1c43-32c8-4878-8afd-9cffbf61ad07-kube-api-access-wjrvv\") pod \"coredns-7d764666f9-44t4m\" (UID: \"cceb1c43-32c8-4878-8afd-9cffbf61ad07\") " pod="kube-system/coredns-7d764666f9-44t4m"
	Dec 12 00:33:43 no-preload-675290 kubelet[2191]: E1212 00:33:43.697391    2191 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-675290" containerName="kube-controller-manager"
	Dec 12 00:33:44 no-preload-675290 kubelet[2191]: E1212 00:33:44.489596    2191 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-44t4m" containerName="coredns"
	Dec 12 00:33:44 no-preload-675290 kubelet[2191]: I1212 00:33:44.500200    2191 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-44t4m" podStartSLOduration=14.500183215 podStartE2EDuration="14.500183215s" podCreationTimestamp="2025-12-12 00:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:44.500154035 +0000 UTC m=+20.158562170" watchObservedRunningTime="2025-12-12 00:33:44.500183215 +0000 UTC m=+20.158591351"
	Dec 12 00:33:44 no-preload-675290 kubelet[2191]: I1212 00:33:44.508777    2191 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.50876181 podStartE2EDuration="14.50876181s" podCreationTimestamp="2025-12-12 00:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:44.508454052 +0000 UTC m=+20.166862201" watchObservedRunningTime="2025-12-12 00:33:44.50876181 +0000 UTC m=+20.167169950"
	Dec 12 00:33:45 no-preload-675290 kubelet[2191]: E1212 00:33:45.493077    2191 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-44t4m" containerName="coredns"
	Dec 12 00:33:46 no-preload-675290 kubelet[2191]: E1212 00:33:46.495358    2191 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-44t4m" containerName="coredns"
	Dec 12 00:33:46 no-preload-675290 kubelet[2191]: I1212 00:33:46.563616    2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr54h\" (UniqueName: \"kubernetes.io/projected/9ea911e6-9d84-479e-80ec-f198c0da93b7-kube-api-access-nr54h\") pod \"busybox\" (UID: \"9ea911e6-9d84-479e-80ec-f198c0da93b7\") " pod="default/busybox"
	Dec 12 00:33:47 no-preload-675290 kubelet[2191]: I1212 00:33:47.506928    2191 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.930986079 podStartE2EDuration="1.506913455s" podCreationTimestamp="2025-12-12 00:33:46 +0000 UTC" firstStartedPulling="2025-12-12 00:33:46.814136388 +0000 UTC m=+22.472544509" lastFinishedPulling="2025-12-12 00:33:47.390063771 +0000 UTC m=+23.048471885" observedRunningTime="2025-12-12 00:33:47.506633455 +0000 UTC m=+23.165041590" watchObservedRunningTime="2025-12-12 00:33:47.506913455 +0000 UTC m=+23.165321589"
	
	
	==> storage-provisioner [211d8f9cc0fc9b78c300d30de44eb2a5d5ea76ed11f116b9568d70df70d0e630] <==
	I1212 00:33:43.723743       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:33:43.731574       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:33:43.731627       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 00:33:43.733582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:43.738041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:33:43.738182       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:33:43.738265       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"61ba2fea-1ca4-4114-a78a-7ecfcddc11ae", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-675290_1d9c2ce1-f740-4a92-888f-0b4f302e2a2d became leader
	I1212 00:33:43.738338       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-675290_1d9c2ce1-f740-4a92-888f-0b4f302e2a2d!
	W1212 00:33:43.739889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:43.743202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:33:43.839075       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-675290_1d9c2ce1-f740-4a92-888f-0b4f302e2a2d!
	W1212 00:33:45.746635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:45.750431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:47.752842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:47.756438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:49.759221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:49.763048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:51.766158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:51.770831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:53.775034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:53.779155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:55.782928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:55.786656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-675290 -n no-preload-675290
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-675290 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-858659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-858659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (238.726409ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-858659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-858659 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-858659 describe deploy/metrics-server -n kube-system: exit status 1 (55.913793ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-858659 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-858659
helpers_test.go:244: (dbg) docker inspect embed-certs-858659:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705",
	        "Created": "2025-12-12T00:33:20.451187787Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 279220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:33:20.488809804Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/hostname",
	        "HostsPath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/hosts",
	        "LogPath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705-json.log",
	        "Name": "/embed-certs-858659",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-858659:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-858659",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705",
	                "LowerDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-858659",
	                "Source": "/var/lib/docker/volumes/embed-certs-858659/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-858659",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-858659",
	                "name.minikube.sigs.k8s.io": "embed-certs-858659",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "91cc29218bfdc85c6b0042d7a6cd7b7e3ddf2a537d9f93db31d1654d6524f8e8",
	            "SandboxKey": "/var/run/docker/netns/91cc29218bfd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-858659": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0c60cc9085547f771925e29c2f723fccc6f964f64d3d64910bb19e85e09e545",
	                    "EndpointID": "c258a7cee0262fe46b7ceea573626cb2c7e681114a6476edc79c6d67a362f08a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e2:dd:c6:e8:d9:5a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-858659",
	                        "feaf39a3749e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
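[editor's note] The post-mortem relies on the NetworkSettings.Ports block above (for example, 22/tcp is published on 127.0.0.1:33068) to reach the node over SSH. A minimal, hypothetical sketch of extracting that forwarded port from `docker inspect` output, not the helpers_test.go code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectResult struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "embed-certs-858659").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspectResult
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	// For the container inspected above this resolves to 127.0.0.1:33068.
	ssh := containers[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("ssh endpoint: %s:%s\n", ssh.HostIp, ssh.HostPort)
}
```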
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-858659 -n embed-certs-858659
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-858659 logs -n 25
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p force-systemd-flag-610815 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-610815 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ ssh     │ force-systemd-flag-610815 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-610815 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ delete  │ -p force-systemd-flag-610815                                                                                                                                                                                                                  │ force-systemd-flag-610815 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ start   │ -p missing-upgrade-038405 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-038405    │ jenkins │ v1.35.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:31 UTC │
	│ start   │ -p missing-upgrade-038405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-038405    │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ delete  │ -p missing-upgrade-038405                                                                                                                                                                                                                     │ missing-upgrade-038405    │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ start   │ -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-605797 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:32 UTC │
	│ stop    │ -p kubernetes-upgrade-605797                                                                                                                                                                                                                  │ kubernetes-upgrade-605797 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-605797 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │                     │
	│ delete  │ -p stopped-upgrade-148693                                                                                                                                                                                                                     │ stopped-upgrade-148693    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p cert-options-319518 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p cert-expiration-673665 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-673665    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ delete  │ -p running-upgrade-299658                                                                                                                                                                                                                     │ running-upgrade-299658    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-743506    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-expiration-673665                                                                                                                                                                                                                     │ cert-expiration-673665    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-675290         │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ cert-options-319518 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ -p cert-options-319518 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-options-319518                                                                                                                                                                                                                        │ cert-options-319518       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-858659        │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-743506    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p old-k8s-version-743506 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-743506    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-675290         │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p no-preload-675290 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-675290         │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-858659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-858659        │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:33:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:33:14.727394  278144 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:33:14.727527  278144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:33:14.727540  278144 out.go:374] Setting ErrFile to fd 2...
	I1212 00:33:14.727548  278144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:33:14.727835  278144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:33:14.728471  278144 out.go:368] Setting JSON to false
	I1212 00:33:14.730016  278144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4541,"bootTime":1765495054,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:33:14.730096  278144 start.go:143] virtualization: kvm guest
	I1212 00:33:14.732300  278144 out.go:179] * [embed-certs-858659] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:33:14.733495  278144 notify.go:221] Checking for updates...
	I1212 00:33:14.733499  278144 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:33:14.734863  278144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:33:14.736177  278144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:14.737704  278144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:33:14.738840  278144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:33:14.739929  278144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:33:14.741610  278144 config.go:182] Loaded profile config "kubernetes-upgrade-605797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:33:14.741755  278144 config.go:182] Loaded profile config "no-preload-675290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:33:14.741905  278144 config.go:182] Loaded profile config "old-k8s-version-743506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 00:33:14.742081  278144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:33:14.769409  278144 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:33:14.769547  278144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:33:14.824953  278144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-12 00:33:14.815394412 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:33:14.825066  278144 docker.go:319] overlay module found
	I1212 00:33:14.827615  278144 out.go:179] * Using the docker driver based on user configuration
	I1212 00:33:14.828650  278144 start.go:309] selected driver: docker
	I1212 00:33:14.828664  278144 start.go:927] validating driver "docker" against <nil>
	I1212 00:33:14.828675  278144 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:33:14.829458  278144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:33:14.887224  278144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-12 00:33:14.877162845 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:33:14.887403  278144 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:33:14.887687  278144 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:33:14.891675  278144 out.go:179] * Using Docker driver with root privileges
	I1212 00:33:14.892895  278144 cni.go:84] Creating CNI manager for ""
	I1212 00:33:14.893000  278144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:14.893016  278144 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:33:14.893119  278144 start.go:353] cluster config:
	{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:33:14.894469  278144 out.go:179] * Starting "embed-certs-858659" primary control-plane node in "embed-certs-858659" cluster
	I1212 00:33:14.895585  278144 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:33:14.896800  278144 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:33:14.897761  278144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:33:14.897795  278144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:33:14.897814  278144 cache.go:65] Caching tarball of preloaded images
	I1212 00:33:14.897820  278144 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:33:14.897914  278144 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:33:14.897930  278144 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:33:14.898070  278144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:33:14.898101  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json: {Name:mka3ad5a51f2e77701ec67a66227f8bb0b6994ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:14.921669  278144 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:33:14.921689  278144 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:33:14.921708  278144 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:33:14.921743  278144 start.go:360] acquireMachinesLock for embed-certs-858659: {Name:mk65733daa8eb01c9a3ad2d27b0888c2a1a8b319 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:33:14.921849  278144 start.go:364] duration metric: took 84.758µs to acquireMachinesLock for "embed-certs-858659"
	I1212 00:33:14.921880  278144 start.go:93] Provisioning new machine with config: &{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:14.921967  278144 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:33:11.914966  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:12.414641  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:12.914599  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:13.414343  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:13.914224  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:14.414638  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:14.914709  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:15.414947  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:15.915234  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:16.414519  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:14.795605  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:33:14.795652  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:12.673526  272590 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.421983552s)
	I1212 00:33:12.673554  272590 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-10975/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1212 00:33:12.673588  272590 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 00:33:12.673653  272590 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 00:33:14.248095  272590 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.574416538s)
	I1212 00:33:14.248127  272590 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-10975/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1212 00:33:14.248166  272590 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 00:33:14.248233  272590 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 00:33:14.813113  272590 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-10975/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 00:33:14.813155  272590 cache_images.go:125] Successfully loaded all cached images
	I1212 00:33:14.813162  272590 cache_images.go:94] duration metric: took 10.342276233s to LoadCachedImages
	I1212 00:33:14.813176  272590 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 00:33:14.813282  272590 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-675290 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-675290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:33:14.813361  272590 ssh_runner.go:195] Run: crio config
	I1212 00:33:14.863560  272590 cni.go:84] Creating CNI manager for ""
	I1212 00:33:14.863584  272590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:14.863605  272590 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:33:14.863636  272590 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-675290 NodeName:no-preload-675290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:33:14.863772  272590 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-675290"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:33:14.863848  272590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:33:14.873217  272590 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1212 00:33:14.873288  272590 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:33:14.881909  272590 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1212 00:33:14.881961  272590 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1212 00:33:14.882020  272590 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1212 00:33:14.881963  272590 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1212 00:33:14.886285  272590 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1212 00:33:14.886316  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1212 00:33:15.884197  272590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:33:15.898090  272590 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1212 00:33:15.901895  272590 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1212 00:33:15.901925  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1212 00:33:16.266026  272590 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1212 00:33:16.270813  272590 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1212 00:33:16.270848  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1212 00:33:16.443667  272590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:33:16.461915  272590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:33:16.477224  272590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:33:16.496004  272590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1212 00:33:16.509653  272590 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:33:16.513544  272590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:33:16.524548  272590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:16.615408  272590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:16.649109  272590 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290 for IP: 192.168.76.2
	I1212 00:33:16.649132  272590 certs.go:195] generating shared ca certs ...
	I1212 00:33:16.649153  272590 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:16.649332  272590 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:33:16.649377  272590 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:33:16.649387  272590 certs.go:257] generating profile certs ...
	I1212 00:33:16.649448  272590 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.key
	I1212 00:33:16.649462  272590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.crt with IP's: []
	I1212 00:33:16.748107  272590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.crt ...
	I1212 00:33:16.748134  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.crt: {Name:mk00022ca9e5428de7e5a583050d69c3c5c2bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:16.748331  272590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.key ...
	I1212 00:33:16.748345  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/client.key: {Name:mkd4f1314753e5364a38754983dd2956364020bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:16.748457  272590 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46
	I1212 00:33:16.748482  272590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 00:33:17.118681  272590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46 ...
	I1212 00:33:17.118712  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46: {Name:mk0a1028bc5d92431abb55b4e7c2d66cfbf9c8a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:17.118911  272590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46 ...
	I1212 00:33:17.118935  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46: {Name:mk90ce7b5c7ba44e4d4cdb05bfe31ea45a556159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:17.119082  272590 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt.56c2be46 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt
	I1212 00:33:17.119258  272590 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key.56c2be46 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key
	I1212 00:33:17.119356  272590 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key
	I1212 00:33:17.119383  272590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt with IP's: []
	I1212 00:33:17.206397  272590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt ...
	I1212 00:33:17.206424  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt: {Name:mk4dba249fcf82c557c21e700f31c0a67e228b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:17.206613  272590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key ...
	I1212 00:33:17.206637  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key: {Name:mkd43d2f214a353e18ef7df608cd9a29775c0278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:17.206850  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:33:17.206900  272590 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:33:17.206928  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:33:17.206975  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:33:17.207014  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:33:17.207055  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:33:17.207133  272590 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:33:17.207775  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:33:17.225767  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:33:17.243136  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:33:17.260318  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:33:17.277081  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:33:17.294021  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:33:17.311412  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:33:17.329281  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/no-preload-675290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:33:17.346588  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:33:14.923847  278144 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 00:33:14.924337  278144 start.go:159] libmachine.API.Create for "embed-certs-858659" (driver="docker")
	I1212 00:33:14.924390  278144 client.go:173] LocalClient.Create starting
	I1212 00:33:14.924524  278144 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem
	I1212 00:33:14.924577  278144 main.go:143] libmachine: Decoding PEM data...
	I1212 00:33:14.924604  278144 main.go:143] libmachine: Parsing certificate...
	I1212 00:33:14.924685  278144 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem
	I1212 00:33:14.924718  278144 main.go:143] libmachine: Decoding PEM data...
	I1212 00:33:14.924746  278144 main.go:143] libmachine: Parsing certificate...
	I1212 00:33:14.925302  278144 cli_runner.go:164] Run: docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:33:14.950174  278144 cli_runner.go:211] docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:33:14.950269  278144 network_create.go:284] running [docker network inspect embed-certs-858659] to gather additional debugging logs...
	I1212 00:33:14.950295  278144 cli_runner.go:164] Run: docker network inspect embed-certs-858659
	W1212 00:33:14.974761  278144 cli_runner.go:211] docker network inspect embed-certs-858659 returned with exit code 1
	I1212 00:33:14.974794  278144 network_create.go:287] error running [docker network inspect embed-certs-858659]: docker network inspect embed-certs-858659: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-858659 not found
	I1212 00:33:14.974816  278144 network_create.go:289] output of [docker network inspect embed-certs-858659]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-858659 not found
	
	** /stderr **
	I1212 00:33:14.975003  278144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:33:15.001326  278144 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ad72354fdcf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:1e:a7:00:22:62} reservation:<nil>}
	I1212 00:33:15.002281  278144 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-18bf2e8051c8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:8e:8f:a6:9d:0c} reservation:<nil>}
	I1212 00:33:15.003308  278144 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5251ddf0a35 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:5a:cf:fd:1d:2a} reservation:<nil>}
	I1212 00:33:15.004265  278144 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f766d8223619 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:f6:b4:47:2f:69:da} reservation:<nil>}
	I1212 00:33:15.005079  278144 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f077d203a2ba IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:8e:69:f0:f2:3a:5d} reservation:<nil>}
	I1212 00:33:15.006240  278144 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb2ab0}
	I1212 00:33:15.006276  278144 network_create.go:124] attempt to create docker network embed-certs-858659 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1212 00:33:15.006349  278144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-858659 embed-certs-858659
	I1212 00:33:15.070104  278144 network_create.go:108] docker network embed-certs-858659 192.168.94.0/24 created
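The lines above show how the subnet for the new profile network is chosen: each candidate /24 already backed by a docker bridge is skipped, and the first free one is used. Below is a minimal Go sketch of that scan; the starting subnet 192.168.49.0/24 and the step of 9 on the third octet are inferred from the candidate order in this log, not taken from minikube's network.go, and firstFreeSubnet is a hypothetical helper name.

package main

import "fmt"

// firstFreeSubnet walks the candidate /24s in the order visible in the log
// (192.168.49.0/24, then +9 on the third octet each attempt) and returns the
// first one not already taken by an existing bridge network.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// Subnets already used by other profiles, per the "skipping subnet" messages above.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.94.0/24, matching the log
}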
	I1212 00:33:15.070132  278144 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-858659" container
	I1212 00:33:15.070189  278144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:33:15.087641  278144 cli_runner.go:164] Run: docker volume create embed-certs-858659 --label name.minikube.sigs.k8s.io=embed-certs-858659 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:33:15.104140  278144 oci.go:103] Successfully created a docker volume embed-certs-858659
	I1212 00:33:15.104214  278144 cli_runner.go:164] Run: docker run --rm --name embed-certs-858659-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-858659 --entrypoint /usr/bin/test -v embed-certs-858659:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:33:16.284491  278144 cli_runner.go:217] Completed: docker run --rm --name embed-certs-858659-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-858659 --entrypoint /usr/bin/test -v embed-certs-858659:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.180222233s)
	I1212 00:33:16.284534  278144 oci.go:107] Successfully prepared a docker volume embed-certs-858659
	I1212 00:33:16.284589  278144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:33:16.284601  278144 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:33:16.284643  278144 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-858659:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:33:16.914882  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:17.414926  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:17.914528  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:18.414943  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:18.914927  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:19.414842  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:19.914278  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:20.414704  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:20.914717  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:21.414504  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:19.592618  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:49408->192.168.85.2:8443: read: connection reset by peer
	I1212 00:33:19.592823  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:19.593231  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:19.791619  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:19.792087  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:20.291785  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:20.292172  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:20.791618  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:20.791974  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:21.291768  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:21.292177  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:21.790791  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:21.791240  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
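The repeated healthz checks above (one roughly every 500ms, each refused while the apiserver comes back up) are a plain poll-until-ready loop. A minimal sketch of that pattern follows; the URL and interval come from the log, while the overall timeout, the helper name waitForHealthz, and the skipped TLS verification are simplifications for illustration, not minikube's actual api_server.go logic.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200
// or the deadline passes, sleeping ~500ms between attempts as in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Skipping verification is a simplification; a real check would trust
		// the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// The 4-minute deadline is an arbitrary placeholder for the sketch.
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}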
	I1212 00:33:17.366333  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:33:17.384570  272590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:33:17.401944  272590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:33:17.414391  272590 ssh_runner.go:195] Run: openssl version
	I1212 00:33:17.421380  272590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.428974  272590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:33:17.436624  272590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.440514  272590 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.440571  272590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:33:17.476855  272590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:33:17.485655  272590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:33:17.494695  272590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.502643  272590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:33:17.510565  272590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.514372  272590 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.514428  272590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:33:17.550028  272590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:17.558154  272590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:17.566696  272590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.574819  272590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:33:17.584533  272590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.588378  272590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.588451  272590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:17.624118  272590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:33:17.631989  272590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
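Each certificate above is installed the same way: the PEM is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a symlink named <hash>.0 is created under /etc/ssl/certs (hence 51391683.0, 3ec20f2e.0 and b5213941.0). The Go sketch below mirrors those two shell commands; installCA is a hypothetical helper for illustration, not minikube's certs.go.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA hashes a CA certificate with openssl and links it under
// /etc/ssl/certs so the system trust store picks it up, mirroring the
// "openssl x509 -hash" and "ln -fs" steps in the log above.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}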
	I1212 00:33:17.639398  272590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:33:17.643488  272590 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:33:17.643544  272590 kubeadm.go:401] StartCluster: {Name:no-preload-675290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-675290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:33:17.643628  272590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:33:17.643690  272590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:33:17.673075  272590 cri.go:89] found id: ""
	I1212 00:33:17.673155  272590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:33:17.681463  272590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:33:17.689858  272590 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:33:17.689918  272590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:33:17.697594  272590 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:33:17.697617  272590 kubeadm.go:158] found existing configuration files:
	
	I1212 00:33:17.697658  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:33:17.705097  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:33:17.705142  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:33:17.712562  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:33:17.720683  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:33:17.720733  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:33:17.728201  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:33:17.735840  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:33:17.735895  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:33:17.743333  272590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:33:17.750913  272590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:33:17.750963  272590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:33:17.757854  272590 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:33:17.864943  272590 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:33:17.921915  272590 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:33:21.914513  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:22.415154  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:22.914233  270803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:23.013859  270803 kubeadm.go:1114] duration metric: took 11.693512767s to wait for elevateKubeSystemPrivileges
	I1212 00:33:23.013899  270803 kubeadm.go:403] duration metric: took 21.39235223s to StartCluster
	I1212 00:33:23.013922  270803 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:23.014005  270803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:23.015046  270803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:23.015303  270803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:33:23.015318  270803 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:23.015380  270803 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:33:23.015468  270803 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-743506"
	I1212 00:33:23.015507  270803 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-743506"
	I1212 00:33:23.015517  270803 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-743506"
	I1212 00:33:23.015542  270803 host.go:66] Checking if "old-k8s-version-743506" exists ...
	I1212 00:33:23.015545  270803 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-743506"
	I1212 00:33:23.015549  270803 config.go:182] Loaded profile config "old-k8s-version-743506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 00:33:23.015961  270803 cli_runner.go:164] Run: docker container inspect old-k8s-version-743506 --format={{.State.Status}}
	I1212 00:33:23.016070  270803 cli_runner.go:164] Run: docker container inspect old-k8s-version-743506 --format={{.State.Status}}
	I1212 00:33:23.016790  270803 out.go:179] * Verifying Kubernetes components...
	I1212 00:33:23.018074  270803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:23.042393  270803 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-743506"
	I1212 00:33:23.042558  270803 host.go:66] Checking if "old-k8s-version-743506" exists ...
	I1212 00:33:23.042986  270803 cli_runner.go:164] Run: docker container inspect old-k8s-version-743506 --format={{.State.Status}}
	I1212 00:33:23.043608  270803 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:33:23.044740  270803 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:23.044768  270803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:33:23.044818  270803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743506
	I1212 00:33:23.079331  270803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/old-k8s-version-743506/id_rsa Username:docker}
	I1212 00:33:23.080872  270803 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:23.080897  270803 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:33:23.080957  270803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743506
	I1212 00:33:23.105874  270803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/old-k8s-version-743506/id_rsa Username:docker}
	I1212 00:33:23.153068  270803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:33:23.193030  270803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:23.203058  270803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:23.226507  270803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:23.451216  270803 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1212 00:33:23.452428  270803 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-743506" to be "Ready" ...
	I1212 00:33:23.654056  270803 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:33:20.370047  278144 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-858659:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.085338961s)
	I1212 00:33:20.370083  278144 kic.go:203] duration metric: took 4.085477303s to extract preloaded images to volume ...
	W1212 00:33:20.370184  278144 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 00:33:20.370229  278144 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 00:33:20.370279  278144 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:33:20.431590  278144 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-858659 --name embed-certs-858659 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-858659 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-858659 --network embed-certs-858659 --ip 192.168.94.2 --volume embed-certs-858659:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 00:33:20.738217  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Running}}
	I1212 00:33:20.759182  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:20.778854  278144 cli_runner.go:164] Run: docker exec embed-certs-858659 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:33:20.828668  278144 oci.go:144] the created container "embed-certs-858659" has a running status.
	I1212 00:33:20.828706  278144 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa...
	I1212 00:33:20.965886  278144 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:33:20.996346  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:21.014956  278144 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:33:21.014979  278144 kic_runner.go:114] Args: [docker exec --privileged embed-certs-858659 chown docker:docker /home/docker/.ssh/authorized_keys]
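The kic provisioning above generates a machine SSH key, copies the public half into the container as /home/docker/.ssh/authorized_keys, and fixes its ownership. A minimal Go sketch of the key-generation half follows; the 2048-bit RSA size and the output paths are assumptions for illustration (it requires golang.org/x/crypto/ssh), not the code from kic.go.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the keypair; 2048 bits is an assumption for the sketch.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key in PEM form, the id_rsa-style file kept on the host.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	// Public key in authorized_keys format; this single line is what ends up
	// as /home/docker/.ssh/authorized_keys inside the node container.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote id_rsa and id_rsa.pub")
}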
	I1212 00:33:21.082712  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:21.106586  278144 machine.go:94] provisionDockerMachine start ...
	I1212 00:33:21.106678  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.131359  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:21.131707  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:21.131730  278144 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:33:21.276119  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:33:21.276147  278144 ubuntu.go:182] provisioning hostname "embed-certs-858659"
	I1212 00:33:21.276210  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.297386  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:21.297706  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:21.297733  278144 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-858659 && echo "embed-certs-858659" | sudo tee /etc/hostname
	I1212 00:33:21.451952  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:33:21.452044  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.476325  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:21.476652  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:21.476682  278144 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-858659' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-858659/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-858659' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:33:21.621125  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:33:21.621157  278144 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:33:21.621206  278144 ubuntu.go:190] setting up certificates
	I1212 00:33:21.621218  278144 provision.go:84] configureAuth start
	I1212 00:33:21.621282  278144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:33:21.642072  278144 provision.go:143] copyHostCerts
	I1212 00:33:21.642136  278144 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:33:21.642150  278144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:33:21.642232  278144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:33:21.642360  278144 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:33:21.642374  278144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:33:21.642414  278144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:33:21.642534  278144 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:33:21.642548  278144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:33:21.642588  278144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:33:21.642676  278144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.embed-certs-858659 san=[127.0.0.1 192.168.94.2 embed-certs-858659 localhost minikube]
	I1212 00:33:21.788738  278144 provision.go:177] copyRemoteCerts
	I1212 00:33:21.788793  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:33:21.788830  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:21.806536  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:21.918035  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:33:21.945961  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:33:21.965649  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 00:33:21.983776  278144 provision.go:87] duration metric: took 362.534714ms to configureAuth
	I1212 00:33:21.983806  278144 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:33:21.984002  278144 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:33:21.984122  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.014438  278144 main.go:143] libmachine: Using SSH client type: native
	I1212 00:33:22.014755  278144 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1212 00:33:22.014780  278144 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:33:22.307190  278144 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:33:22.307211  278144 machine.go:97] duration metric: took 1.200604199s to provisionDockerMachine
	I1212 00:33:22.307228  278144 client.go:176] duration metric: took 7.38282296s to LocalClient.Create
	I1212 00:33:22.307253  278144 start.go:167] duration metric: took 7.38291887s to libmachine.API.Create "embed-certs-858659"
	I1212 00:33:22.307266  278144 start.go:293] postStartSetup for "embed-certs-858659" (driver="docker")
	I1212 00:33:22.307280  278144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:33:22.307346  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:33:22.307394  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.325538  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.424001  278144 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:33:22.427488  278144 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:33:22.427521  278144 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:33:22.427537  278144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:33:22.427586  278144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:33:22.427662  278144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:33:22.427750  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:33:22.435119  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:33:22.455192  278144 start.go:296] duration metric: took 147.912888ms for postStartSetup
	I1212 00:33:22.455613  278144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:33:22.474728  278144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:33:22.474959  278144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:33:22.474994  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.492769  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.583869  278144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:33:22.588020  278144 start.go:128] duration metric: took 7.666039028s to createHost
	I1212 00:33:22.588046  278144 start.go:83] releasing machines lock for "embed-certs-858659", held for 7.666181656s
	I1212 00:33:22.588106  278144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:33:22.604658  278144 ssh_runner.go:195] Run: cat /version.json
	I1212 00:33:22.604702  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.604722  278144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:33:22.604782  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:22.624351  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.624645  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:22.766960  278144 ssh_runner.go:195] Run: systemctl --version
	I1212 00:33:22.773040  278144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:33:22.807594  278144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:33:22.813502  278144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:33:22.813578  278144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:33:22.848889  278144 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:33:22.848911  278144 start.go:496] detecting cgroup driver to use...
	I1212 00:33:22.848946  278144 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:33:22.849004  278144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:33:22.866214  278144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:33:22.883089  278144 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:33:22.883154  278144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:33:22.903153  278144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:33:22.928607  278144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:33:23.051147  278144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:33:23.207194  278144 docker.go:234] disabling docker service ...
	I1212 00:33:23.207354  278144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:33:23.232382  278144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:33:23.250204  278144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:33:23.374364  278144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:33:23.499465  278144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:33:23.517532  278144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:33:23.538077  278144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:33:23.538180  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.551014  278144 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:33:23.551078  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.562817  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.574022  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.585051  278144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:33:23.594447  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.605211  278144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.621829  278144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
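The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the expected pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and an unrestricted unprivileged port range. The sketch below writes those same settings into a separate drop-in purely to show the intended end state; the drop-in file name is hypothetical, and the actual flow edits 02-crio.conf in place as shown in the log.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Values gathered from the sed edits in the log above.
	fragment := "pause_image = \"registry.k8s.io/pause:3.10.1\"\n" +
		"cgroup_manager = \"systemd\"\n" +
		"conmon_cgroup = \"pod\"\n" +
		"default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	// Writing a separate drop-in is a simplification for this sketch.
	if err := os.WriteFile("/etc/crio/crio.conf.d/99-sketch.conf", []byte(fragment), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}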
	I1212 00:33:23.632339  278144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:33:23.642118  278144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:33:23.651626  278144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:23.735724  278144 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:33:23.880406  278144 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:33:23.880467  278144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:33:23.884392  278144 start.go:564] Will wait 60s for crictl version
	I1212 00:33:23.884439  278144 ssh_runner.go:195] Run: which crictl
	I1212 00:33:23.887821  278144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:33:23.911824  278144 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:33:23.911892  278144 ssh_runner.go:195] Run: crio --version
	I1212 00:33:23.938726  278144 ssh_runner.go:195] Run: crio --version
	I1212 00:33:23.967016  278144 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:33:23.968117  278144 cli_runner.go:164] Run: docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:33:23.986276  278144 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 00:33:23.990186  278144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
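The bash one-liner above keeps /etc/hosts idempotent: any stale host.minikube.internal entry is dropped and the current gateway mapping is re-appended. A small Go equivalent of that rewrite is sketched below; ensureHostRecord is a hypothetical helper, and the actual step is the shell command shown, run over SSH.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostRecord drops any existing line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log above.
func ensureHostRecord(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostRecord("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}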
	I1212 00:33:24.000018  278144 kubeadm.go:884] updating cluster {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:33:24.000130  278144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:33:24.000180  278144 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:33:24.032804  278144 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:33:24.032832  278144 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:33:24.032890  278144 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:33:24.062431  278144 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:33:24.062456  278144 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:33:24.062465  278144 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1212 00:33:24.062645  278144 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-858659 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:33:24.062725  278144 ssh_runner.go:195] Run: crio config
	I1212 00:33:24.109010  278144 cni.go:84] Creating CNI manager for ""
	I1212 00:33:24.109037  278144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:24.109054  278144 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:33:24.109075  278144 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-858659 NodeName:embed-certs-858659 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:33:24.109202  278144 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-858659"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:33:24.109260  278144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:33:24.117514  278144 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:33:24.117569  278144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:33:24.125056  278144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1212 00:33:24.137518  278144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:33:24.151599  278144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1212 00:33:24.163535  278144 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:33:24.166948  278144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
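The /bin/bash one-liner above is how the run pins control-plane.minikube.internal in /etc/hosts: it filters out any stale entry for that name, appends the current IP, and copies the temp file back over /etc/hosts with sudo. A minimal Go sketch of the same upsert (a hypothetical standalone helper, not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry mirrors the grep -v / echo / cp pipeline from the log:
    // drop any existing "<ip>\t<host>" line for host, then append a fresh one.
    func upsertHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale mapping for this host
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Needs root to write /etc/hosts; point it at a copy to try it out.
        if err := upsertHostsEntry("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }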
	I1212 00:33:24.176282  278144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:24.267072  278144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:24.289673  278144 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659 for IP: 192.168.94.2
	I1212 00:33:24.289695  278144 certs.go:195] generating shared ca certs ...
	I1212 00:33:24.289716  278144 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.289894  278144 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:33:24.289977  278144 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:33:24.289996  278144 certs.go:257] generating profile certs ...
	I1212 00:33:24.290069  278144 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key
	I1212 00:33:24.290095  278144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.crt with IP's: []
	I1212 00:33:24.425621  278144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.crt ...
	I1212 00:33:24.425651  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.crt: {Name:mk45b9fc7c32e03cd8b8b253cee0beecc89168ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.425825  278144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key ...
	I1212 00:33:24.425841  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key: {Name:mk7969034f40478ebc3fcd8da2e89e524ba77096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.425960  278144 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc
	I1212 00:33:24.425981  278144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1212 00:33:24.627246  278144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc ...
	I1212 00:33:24.627271  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc: {Name:mkf363b5d4278a387e18f286b3c76b364b923111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.627425  278144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc ...
	I1212 00:33:24.627438  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc: {Name:mkb8910c32db51006465f917ed06964af4a9674d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.627524  278144 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt.89584afc -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt
	I1212 00:33:24.627596  278144 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key
	I1212 00:33:24.627649  278144 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key
	I1212 00:33:24.627670  278144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt with IP's: []
	I1212 00:33:24.683493  278144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt ...
	I1212 00:33:24.683515  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt: {Name:mk4c83643d0e89a51dc996cf2dabd1ed6bdbf2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.683642  278144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key ...
	I1212 00:33:24.683664  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key: {Name:mkf320b4e9041cf5c42937a4ade7d266ee3cce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
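The client, apiserver, and proxy-client certificates generated above are all signed by the shared minikubeCA that was reused earlier. A rough, self-contained Go sketch of the apiserver-style step, creating a throwaway CA and then a serving certificate with the same IP SANs the log lists (10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2); illustrative only, not minikube's crypto helper:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (the real run reuses the existing minikubeCA key pair).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert for the apiserver with the IP SANs from the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
            },
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        // Emit the leaf as PEM (errors ignored throughout for brevity).
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }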
	I1212 00:33:24.683853  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:33:24.683891  278144 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:33:24.683898  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:33:24.683920  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:33:24.683943  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:33:24.683966  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:33:24.684004  278144 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:33:24.684593  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:33:24.702599  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:33:24.719710  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:33:25.192170  272590 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:33:25.192235  272590 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:33:25.192361  272590 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:33:25.192432  272590 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:33:25.192519  272590 kubeadm.go:319] OS: Linux
	I1212 00:33:25.192568  272590 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:33:25.192609  272590 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:33:25.192664  272590 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:33:25.192706  272590 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:33:25.192747  272590 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:33:25.192786  272590 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:33:25.192831  272590 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:33:25.192889  272590 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:33:25.192982  272590 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:33:25.193125  272590 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:33:25.193255  272590 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:33:25.193337  272590 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:33:25.194568  272590 out.go:252]   - Generating certificates and keys ...
	I1212 00:33:25.194653  272590 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:33:25.194746  272590 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:33:25.194846  272590 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:33:25.194937  272590 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:33:25.195025  272590 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:33:25.195094  272590 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:33:25.195179  272590 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:33:25.195303  272590 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-675290] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:33:25.195397  272590 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:33:25.195573  272590 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-675290] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:33:25.195669  272590 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:33:25.195763  272590 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:33:25.195833  272590 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:33:25.195930  272590 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:33:25.196016  272590 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:33:25.196099  272590 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:33:25.196177  272590 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:33:25.196289  272590 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:33:25.196385  272590 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:33:25.196532  272590 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:33:25.196627  272590 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:33:25.197638  272590 out.go:252]   - Booting up control plane ...
	I1212 00:33:25.197759  272590 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:33:25.197871  272590 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:33:25.197963  272590 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:33:25.198118  272590 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:33:25.198245  272590 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:33:25.198333  272590 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:33:25.198415  272590 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:33:25.198461  272590 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:33:25.198654  272590 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:33:25.198773  272590 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:33:25.198875  272590 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001760288s
	I1212 00:33:25.199015  272590 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:33:25.199122  272590 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1212 00:33:25.199238  272590 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:33:25.199343  272590 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:33:25.199415  272590 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.336113ms
	I1212 00:33:25.199497  272590 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.771564525s
	I1212 00:33:25.199552  272590 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501309707s
	I1212 00:33:25.199690  272590 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:33:25.199852  272590 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:33:25.199905  272590 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:33:25.200119  272590 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-675290 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:33:25.200203  272590 kubeadm.go:319] [bootstrap-token] Using token: 6k5hlm.3rv4y6xn4tgjibyr
	I1212 00:33:25.201870  272590 out.go:252]   - Configuring RBAC rules ...
	I1212 00:33:25.201974  272590 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:33:25.202073  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:33:25.202233  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:33:25.202380  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:33:25.202516  272590 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:33:25.202618  272590 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:33:25.202732  272590 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:33:25.202779  272590 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:33:25.202821  272590 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:33:25.202826  272590 kubeadm.go:319] 
	I1212 00:33:25.202910  272590 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:33:25.202924  272590 kubeadm.go:319] 
	I1212 00:33:25.203016  272590 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:33:25.203024  272590 kubeadm.go:319] 
	I1212 00:33:25.203058  272590 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:33:25.203145  272590 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:33:25.203208  272590 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:33:25.203217  272590 kubeadm.go:319] 
	I1212 00:33:25.203293  272590 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:33:25.203301  272590 kubeadm.go:319] 
	I1212 00:33:25.203364  272590 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:33:25.203385  272590 kubeadm.go:319] 
	I1212 00:33:25.203462  272590 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:33:25.203594  272590 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:33:25.203668  272590 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:33:25.203679  272590 kubeadm.go:319] 
	I1212 00:33:25.203748  272590 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:33:25.203818  272590 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:33:25.203825  272590 kubeadm.go:319] 
	I1212 00:33:25.203893  272590 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6k5hlm.3rv4y6xn4tgjibyr \
	I1212 00:33:25.203984  272590 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:33:25.204004  272590 kubeadm.go:319] 	--control-plane 
	I1212 00:33:25.204007  272590 kubeadm.go:319] 
	I1212 00:33:25.204131  272590 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:33:25.204141  272590 kubeadm.go:319] 
	I1212 00:33:25.204227  272590 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6k5hlm.3rv4y6xn4tgjibyr \
	I1212 00:33:25.204320  272590 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
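Both clusters in this run print the same --discovery-token-ca-cert-hash because they share the minikubeCA generated for the job. Per kubeadm's convention, that hash is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, which can be verified with a few lines of Go (cert path taken from the log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm's discovery hash: sha256 over the SPKI of the CA certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }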
	I1212 00:33:25.204332  272590 cni.go:84] Creating CNI manager for ""
	I1212 00:33:25.204338  272590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:25.205594  272590 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 00:33:23.655263  270803 addons.go:530] duration metric: took 639.880413ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 00:33:23.955245  270803 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-743506" context rescaled to 1 replicas
	W1212 00:33:25.457032  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:22.290852  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:22.291316  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:22.791613  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:22.791924  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:23.291830  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:23.292241  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:23.791654  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:23.793187  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:24.291592  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:24.291924  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:24.791612  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:24.792009  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:25.291649  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:25.292046  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:25.791751  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:25.792178  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:26.291614  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:26.291985  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:26.791624  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:26.792019  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:25.206570  272590 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:33:25.210928  272590 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1212 00:33:25.210950  272590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:33:25.224287  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:33:25.457167  272590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:33:25.457324  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:25.457418  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-675290 minikube.k8s.io/updated_at=2025_12_12T00_33_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=no-preload-675290 minikube.k8s.io/primary=true
	I1212 00:33:25.476212  272590 ops.go:34] apiserver oom_adj: -16
	I1212 00:33:25.564286  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:26.065329  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:26.564328  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:27.064508  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:24.736576  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:33:24.755063  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 00:33:24.774545  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:33:24.793295  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:33:24.812705  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:33:24.830500  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:33:24.850525  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:33:24.868692  278144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:33:24.886268  278144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:33:24.897891  278144 ssh_runner.go:195] Run: openssl version
	I1212 00:33:24.904042  278144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.911176  278144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:33:24.918098  278144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.921629  278144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.921676  278144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:33:24.959657  278144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:24.967294  278144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:33:24.975343  278144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:24.982273  278144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:33:24.990092  278144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:24.993779  278144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:24.993827  278144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:25.031998  278144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:33:25.039700  278144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:33:25.048043  278144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.056012  278144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:33:25.064509  278144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.069175  278144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.069228  278144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:33:25.108584  278144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:33:25.117805  278144 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
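The openssl x509 -hash / ln -fs pairs above (51391683.0, 3ec20f2e.0, b5213941.0) build the standard OpenSSL lookup layout, where /etc/ssl/certs/<subject-hash>.0 points at the PEM file so clients can find a CA by hash. The same two steps in a small Go sketch (assumes openssl on PATH and root privileges; a hypothetical helper, not minikube code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a PEM certificate
    // and (re)creates the /etc/ssl/certs/<hash>.0 symlink, like the ln -fs calls.
    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // emulate -f: replace an existing link if present
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }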
	I1212 00:33:25.124959  278144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:33:25.128496  278144 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:33:25.128549  278144 kubeadm.go:401] StartCluster: {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:33:25.128616  278144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:33:25.128656  278144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:33:25.152735  278144 cri.go:89] found id: ""
	I1212 00:33:25.152791  278144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:33:25.160002  278144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:33:25.168186  278144 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:33:25.168235  278144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:33:25.175527  278144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:33:25.175541  278144 kubeadm.go:158] found existing configuration files:
	
	I1212 00:33:25.175574  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:33:25.182726  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:33:25.182769  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:33:25.190663  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:33:25.198827  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:33:25.198870  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:33:25.206417  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:33:25.214331  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:33:25.214380  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:33:25.221328  278144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:33:25.228771  278144 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:33:25.228812  278144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:33:25.236619  278144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:33:25.298265  278144 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:33:25.365354  278144 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:33:27.564979  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:28.064794  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:28.564602  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:29.064565  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:29.565218  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:30.064844  272590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:30.133149  272590 kubeadm.go:1114] duration metric: took 4.675879902s to wait for elevateKubeSystemPrivileges
	I1212 00:33:30.133195  272590 kubeadm.go:403] duration metric: took 12.489653684s to StartCluster
	I1212 00:33:30.133220  272590 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:30.133290  272590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:30.134407  272590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:30.145606  272590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:33:30.145618  272590 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:30.145650  272590 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:33:30.145741  272590 addons.go:70] Setting storage-provisioner=true in profile "no-preload-675290"
	I1212 00:33:30.145762  272590 addons.go:239] Setting addon storage-provisioner=true in "no-preload-675290"
	I1212 00:33:30.145799  272590 host.go:66] Checking if "no-preload-675290" exists ...
	I1212 00:33:30.145837  272590 config.go:182] Loaded profile config "no-preload-675290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:33:30.145762  272590 addons.go:70] Setting default-storageclass=true in profile "no-preload-675290"
	I1212 00:33:30.145893  272590 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-675290"
	I1212 00:33:30.146227  272590 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:33:30.146407  272590 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:33:30.208678  272590 addons.go:239] Setting addon default-storageclass=true in "no-preload-675290"
	I1212 00:33:30.208723  272590 host.go:66] Checking if "no-preload-675290" exists ...
	I1212 00:33:30.209146  272590 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:33:30.209555  272590 out.go:179] * Verifying Kubernetes components...
	I1212 00:33:30.228632  272590 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:30.229410  272590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:33:30.229504  272590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:33:30.230540  272590 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:33:30.231654  272590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:30.232880  272590 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:30.232898  272590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:33:30.232958  272590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:33:30.241902  272590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
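The sed pipeline above rewrites the coredns ConfigMap in place; judging only from the expressions in that command, the patched Corefile gains a log directive before errors and a hosts block immediately ahead of the forward plugin, roughly (other plugins elided):

            log
            errors
            ...
            hosts {
               192.168.76.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf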
	I1212 00:33:30.255977  272590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:33:30.260883  272590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:33:30.396215  272590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:30.411003  272590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:30.418969  272590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:30.499202  272590 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1212 00:33:30.732276  272590 node_ready.go:35] waiting up to 6m0s for node "no-preload-675290" to be "Ready" ...
	I1212 00:33:30.733577  272590 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1212 00:33:27.955456  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	W1212 00:33:30.457818  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:27.291880  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:27.292353  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:27.790970  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:27.791329  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:28.290880  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:28.291231  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:28.790864  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:28.791237  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:29.290866  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:29.291244  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:29.790872  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:29.791234  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:30.291461  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:30.291886  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:30.791224  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:30.734544  272590 addons.go:530] duration metric: took 588.890424ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1212 00:33:31.004952  272590 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-675290" context rescaled to 1 replicas
	W1212 00:33:32.955094  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	W1212 00:33:34.955638  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:35.791724  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:33:35.791770  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:37.194116  278144 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:33:37.194203  278144 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:33:37.194314  278144 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:33:37.194389  278144 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:33:37.194458  278144 kubeadm.go:319] OS: Linux
	I1212 00:33:37.194544  278144 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:33:37.194613  278144 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:33:37.194687  278144 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:33:37.194756  278144 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:33:37.194825  278144 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:33:37.194905  278144 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:33:37.194979  278144 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:33:37.195045  278144 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:33:37.195123  278144 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:33:37.195248  278144 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:33:37.195386  278144 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:33:37.195465  278144 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:33:37.196692  278144 out.go:252]   - Generating certificates and keys ...
	I1212 00:33:37.196747  278144 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:33:37.196815  278144 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:33:37.196870  278144 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:33:37.196966  278144 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:33:37.197058  278144 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:33:37.197125  278144 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:33:37.197200  278144 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:33:37.197369  278144 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-858659 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:33:37.197413  278144 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:33:37.197552  278144 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-858659 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:33:37.197607  278144 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:33:37.197660  278144 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:33:37.197696  278144 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:33:37.197749  278144 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:33:37.197790  278144 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:33:37.197879  278144 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:33:37.197966  278144 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:33:37.198076  278144 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:33:37.198145  278144 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:33:37.198253  278144 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:33:37.198355  278144 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:33:37.199406  278144 out.go:252]   - Booting up control plane ...
	I1212 00:33:37.199517  278144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:33:37.199604  278144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:33:37.199697  278144 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:33:37.199790  278144 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:33:37.199864  278144 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:33:37.199961  278144 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:33:37.200052  278144 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:33:37.200115  278144 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:33:37.200307  278144 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:33:37.200411  278144 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:33:37.200521  278144 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001833759s
	I1212 00:33:37.200639  278144 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:33:37.200752  278144 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1212 00:33:37.200863  278144 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:33:37.200962  278144 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:33:37.201076  278144 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.434651471s
	I1212 00:33:37.201168  278144 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.026988685s
	I1212 00:33:37.201264  278144 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00162745s
	I1212 00:33:37.201389  278144 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:33:37.201534  278144 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:33:37.201603  278144 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:33:37.201804  278144 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-858659 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:33:37.201887  278144 kubeadm.go:319] [bootstrap-token] Using token: ggt060.eefg72qnn6nqw2lf
	I1212 00:33:37.203122  278144 out.go:252]   - Configuring RBAC rules ...
	I1212 00:33:37.203215  278144 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:33:37.203290  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:33:37.203414  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:33:37.203586  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:33:37.203694  278144 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:33:37.203763  278144 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:33:37.203865  278144 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:33:37.203907  278144 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:33:37.203946  278144 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:33:37.203952  278144 kubeadm.go:319] 
	I1212 00:33:37.204010  278144 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:33:37.204016  278144 kubeadm.go:319] 
	I1212 00:33:37.204085  278144 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:33:37.204091  278144 kubeadm.go:319] 
	I1212 00:33:37.204114  278144 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:33:37.204164  278144 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:33:37.204212  278144 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:33:37.204218  278144 kubeadm.go:319] 
	I1212 00:33:37.204259  278144 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:33:37.204264  278144 kubeadm.go:319] 
	I1212 00:33:37.204300  278144 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:33:37.204305  278144 kubeadm.go:319] 
	I1212 00:33:37.204349  278144 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:33:37.204441  278144 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:33:37.204532  278144 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:33:37.204545  278144 kubeadm.go:319] 
	I1212 00:33:37.204614  278144 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:33:37.204686  278144 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:33:37.204691  278144 kubeadm.go:319] 
	I1212 00:33:37.204771  278144 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ggt060.eefg72qnn6nqw2lf \
	I1212 00:33:37.204864  278144 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:33:37.204883  278144 kubeadm.go:319] 	--control-plane 
	I1212 00:33:37.204886  278144 kubeadm.go:319] 
	I1212 00:33:37.204951  278144 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:33:37.204957  278144 kubeadm.go:319] 
	I1212 00:33:37.205046  278144 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ggt060.eefg72qnn6nqw2lf \
	I1212 00:33:37.205184  278144 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1212 00:33:37.205197  278144 cni.go:84] Creating CNI manager for ""
	I1212 00:33:37.205206  278144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:33:37.206375  278144 out.go:179] * Configuring CNI (Container Networking Interface) ...
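The kubeadm join commands printed a few lines above pin the cluster CA through --discovery-token-ca-cert-hash, which kubeadm derives as the SHA-256 of the CA certificate's Subject Public Key Info. As a hedged illustration (this is not minikube or kubeadm code), the sketch below recomputes that value from the conventional kubeadm CA path so a joining node's hash can be compared against the one logged; the file path is an assumption about a standard kubeadm layout.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Default kubeadm location for the cluster CA certificate (assumed path).
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}

If the printed value matches the --discovery-token-ca-cert-hash in the log, a worker joining with that flag is pinning the same CA the control plane was initialized with.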
	W1212 00:33:32.734941  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	W1212 00:33:34.735837  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	W1212 00:33:37.235035  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	I1212 00:33:37.207280  278144 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:33:37.211289  278144 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 00:33:37.211304  278144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:33:37.224403  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:33:37.418751  278144 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:33:37.418835  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:37.418886  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-858659 minikube.k8s.io/updated_at=2025_12_12T00_33_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=embed-certs-858659 minikube.k8s.io/primary=true
	I1212 00:33:37.430124  278144 ops.go:34] apiserver oom_adj: -16
	I1212 00:33:37.494810  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:37.995607  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:38.495663  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:38.995202  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:39.495603  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1212 00:33:36.956141  270803 node_ready.go:57] node "old-k8s-version-743506" has "Ready":"False" status (will retry)
	I1212 00:33:38.455307  270803 node_ready.go:49] node "old-k8s-version-743506" is "Ready"
	I1212 00:33:38.455334  270803 node_ready.go:38] duration metric: took 15.002846535s for node "old-k8s-version-743506" to be "Ready" ...
	I1212 00:33:38.455348  270803 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:33:38.455398  270803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:33:38.467187  270803 api_server.go:72] duration metric: took 15.451831949s to wait for apiserver process to appear ...
	I1212 00:33:38.467214  270803 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:33:38.467240  270803 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:33:38.471174  270803 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1212 00:33:38.474455  270803 api_server.go:141] control plane version: v1.28.0
	I1212 00:33:38.474507  270803 api_server.go:131] duration metric: took 7.285296ms to wait for apiserver health ...
	I1212 00:33:38.474518  270803 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:33:38.478117  270803 system_pods.go:59] 8 kube-system pods found
	I1212 00:33:38.478153  270803 system_pods.go:61] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:38.478162  270803 system_pods.go:61] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running
	I1212 00:33:38.478171  270803 system_pods.go:61] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:33:38.478177  270803 system_pods.go:61] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running
	I1212 00:33:38.478183  270803 system_pods.go:61] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running
	I1212 00:33:38.478189  270803 system_pods.go:61] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:33:38.478195  270803 system_pods.go:61] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running
	I1212 00:33:38.478204  270803 system_pods.go:61] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:33:38.478211  270803 system_pods.go:74] duration metric: took 3.685843ms to wait for pod list to return data ...
	I1212 00:33:38.478222  270803 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:33:38.480216  270803 default_sa.go:45] found service account: "default"
	I1212 00:33:38.480238  270803 default_sa.go:55] duration metric: took 2.008735ms for default service account to be created ...
	I1212 00:33:38.480247  270803 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:33:38.483031  270803 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:38.483061  270803 system_pods.go:89] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:38.483070  270803 system_pods.go:89] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running
	I1212 00:33:38.483077  270803 system_pods.go:89] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:33:38.483087  270803 system_pods.go:89] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running
	I1212 00:33:38.483097  270803 system_pods.go:89] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running
	I1212 00:33:38.483104  270803 system_pods.go:89] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:33:38.483109  270803 system_pods.go:89] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running
	I1212 00:33:38.483118  270803 system_pods.go:89] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:33:38.483126  270803 system_pods.go:126] duration metric: took 2.872475ms to wait for k8s-apps to be running ...
	I1212 00:33:38.483137  270803 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:33:38.483182  270803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:33:38.496061  270803 system_svc.go:56] duration metric: took 12.919771ms WaitForService to wait for kubelet
	I1212 00:33:38.496082  270803 kubeadm.go:587] duration metric: took 15.480732762s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:33:38.496102  270803 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:33:38.497977  270803 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:33:38.497995  270803 node_conditions.go:123] node cpu capacity is 8
	I1212 00:33:38.498010  270803 node_conditions.go:105] duration metric: took 1.903088ms to run NodePressure ...
	I1212 00:33:38.498022  270803 start.go:242] waiting for startup goroutines ...
	I1212 00:33:38.498030  270803 start.go:247] waiting for cluster config update ...
	I1212 00:33:38.498043  270803 start.go:256] writing updated cluster config ...
	I1212 00:33:38.498328  270803 ssh_runner.go:195] Run: rm -f paused
	I1212 00:33:38.501782  270803 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:38.505265  270803 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.510596  270803 pod_ready.go:94] pod "coredns-5dd5756b68-nxwdc" is "Ready"
	I1212 00:33:39.510619  270803 pod_ready.go:86] duration metric: took 1.005335277s for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.513364  270803 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.517516  270803 pod_ready.go:94] pod "etcd-old-k8s-version-743506" is "Ready"
	I1212 00:33:39.517534  270803 pod_ready.go:86] duration metric: took 4.146163ms for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.520326  270803 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.524429  270803 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-743506" is "Ready"
	I1212 00:33:39.524446  270803 pod_ready.go:86] duration metric: took 4.103471ms for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.527203  270803 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.709024  270803 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-743506" is "Ready"
	I1212 00:33:39.709046  270803 pod_ready.go:86] duration metric: took 181.825306ms for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:39.909520  270803 pod_ready.go:83] waiting for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.308323  270803 pod_ready.go:94] pod "kube-proxy-pz8kt" is "Ready"
	I1212 00:33:40.308348  270803 pod_ready.go:86] duration metric: took 398.805252ms for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.509941  270803 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.908891  270803 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-743506" is "Ready"
	I1212 00:33:40.908911  270803 pod_ready.go:86] duration metric: took 398.947384ms for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:40.908922  270803 pod_ready.go:40] duration metric: took 2.407114173s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:40.958106  270803 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1212 00:33:40.959982  270803 out.go:203] 
	W1212 00:33:40.961226  270803 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1212 00:33:40.962401  270803 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1212 00:33:40.963860  270803 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-743506" cluster and "default" namespace by default
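The warning just above flags a client/server version gap: kubectl 1.34.3 against a 1.28.0 cluster is a minor skew of 6, well outside the +/-1 minor window Kubernetes supports. A minimal sketch of that check (not minikube's actual implementation) is shown below; the version strings are the ones from the log.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectlVersion, clusterVersion := "1.34.3", "1.28.0" // values taken from the log above
	skew := minor(kubectlVersion) - minor(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew)
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with this cluster")
	}
}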
	I1212 00:33:40.795082  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:33:40.795136  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:33:40.795187  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:33:40.820906  263844 cri.go:89] found id: "e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:33:40.820925  263844 cri.go:89] found id: "e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254"
	I1212 00:33:40.820929  263844 cri.go:89] found id: ""
	I1212 00:33:40.820936  263844 logs.go:282] 2 containers: [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106 e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254]
	I1212 00:33:40.820987  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.824897  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.828680  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:33:40.828744  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:33:40.854459  263844 cri.go:89] found id: ""
	I1212 00:33:40.854509  263844 logs.go:282] 0 containers: []
	W1212 00:33:40.854518  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:33:40.854526  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:33:40.854579  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:33:40.881533  263844 cri.go:89] found id: ""
	I1212 00:33:40.881555  263844 logs.go:282] 0 containers: []
	W1212 00:33:40.881564  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:33:40.881572  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:33:40.881630  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:33:40.906345  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:33:40.906365  263844 cri.go:89] found id: ""
	I1212 00:33:40.906374  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:33:40.906435  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.910519  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:33:40.910577  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:33:40.939434  263844 cri.go:89] found id: ""
	I1212 00:33:40.939463  263844 logs.go:282] 0 containers: []
	W1212 00:33:40.939499  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:33:40.939509  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:33:40.939555  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:33:40.966792  263844 cri.go:89] found id: "b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:33:40.966812  263844 cri.go:89] found id: "962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e"
	I1212 00:33:40.966818  263844 cri.go:89] found id: ""
	I1212 00:33:40.966826  263844 logs.go:282] 2 containers: [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0 962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e]
	I1212 00:33:40.966878  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.970829  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:40.974717  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:33:40.974776  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:33:41.005958  263844 cri.go:89] found id: ""
	I1212 00:33:41.005985  263844 logs.go:282] 0 containers: []
	W1212 00:33:41.005996  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:33:41.006006  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:33:41.006056  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:33:41.039359  263844 cri.go:89] found id: ""
	I1212 00:33:41.039388  263844 logs.go:282] 0 containers: []
	W1212 00:33:41.039399  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:33:41.039418  263844 logs.go:123] Gathering logs for kube-controller-manager [962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e] ...
	I1212 00:33:41.039433  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 962f34c2233cccd7ae7eb721e09c901ae4656fc23b8aa6f3a2f3688152708c6e"
	I1212 00:33:41.071692  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:33:41.071722  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:33:41.114665  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:33:41.114701  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:33:41.158370  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:33:41.158395  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:33:41.172978  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:33:41.173006  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:33:41.203032  263844 logs.go:123] Gathering logs for kube-apiserver [e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254] ...
	I1212 00:33:41.203057  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5dcbdf2edc04b11a39a16ad0e5bded31475ff0b3d269fa7ad2a0b58adf37254"
	I1212 00:33:41.234616  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:33:41.234651  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:33:41.261883  263844 logs.go:123] Gathering logs for kube-controller-manager [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0] ...
	I1212 00:33:41.261908  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:33:41.288586  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:33:41.288610  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:33:41.345462  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:33:41.345501  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
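The log-gathering sequence above follows a simple pattern: for each control-plane component, list matching container IDs with crictl, then tail each container's logs. The sketch below reproduces that pattern locally with os/exec for illustration only; the real run issues these commands over SSH inside the node, and the exact crictl flags come from the log lines, not from this sketch.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches the component.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
		ids, err := containerIDs(component)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no containers found matching %q\n", component)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, mirroring the commands recorded in the log.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s (%s) ===\n%s\n", component, id, logs)
		}
	}
}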
	W1212 00:33:39.235689  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	W1212 00:33:41.735356  272590 node_ready.go:57] node "no-preload-675290" has "Ready":"False" status (will retry)
	I1212 00:33:39.995687  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:40.495092  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:40.995851  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:41.495552  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:41.995217  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:42.495439  278144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:33:42.557808  278144 kubeadm.go:1114] duration metric: took 5.139031528s to wait for elevateKubeSystemPrivileges
	I1212 00:33:42.557850  278144 kubeadm.go:403] duration metric: took 17.429303229s to StartCluster
	I1212 00:33:42.557872  278144 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:42.557936  278144 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:33:42.559776  278144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:42.560013  278144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:33:42.560028  278144 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:33:42.560006  278144 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:33:42.560108  278144 addons.go:70] Setting default-storageclass=true in profile "embed-certs-858659"
	I1212 00:33:42.560154  278144 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-858659"
	I1212 00:33:42.560226  278144 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:33:42.560101  278144 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-858659"
	I1212 00:33:42.560305  278144 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-858659"
	I1212 00:33:42.560343  278144 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:33:42.560558  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:42.560804  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:42.562345  278144 out.go:179] * Verifying Kubernetes components...
	I1212 00:33:42.563485  278144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:42.582693  278144 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:33:42.583659  278144 addons.go:239] Setting addon default-storageclass=true in "embed-certs-858659"
	I1212 00:33:42.583715  278144 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:33:42.584226  278144 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:33:42.586890  278144 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:42.586913  278144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:33:42.586987  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:42.611417  278144 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:42.611444  278144 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:33:42.611665  278144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:33:42.616944  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:42.634991  278144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:33:42.644809  278144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:33:42.696105  278144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:42.731013  278144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:33:42.747779  278144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:33:42.816002  278144 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1212 00:33:42.817999  278144 node_ready.go:35] waiting up to 6m0s for node "embed-certs-858659" to be "Ready" ...
	I1212 00:33:43.019945  278144 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:33:43.021070  278144 addons.go:530] duration metric: took 461.030823ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 00:33:43.319944  278144 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-858659" context rescaled to 1 replicas
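The sed pipeline over the coredns ConfigMap a few lines above injects a hosts block that resolves host.minikube.internal to the host gateway before CoreDNS's forward plugin. The sketch below shows that text transformation in isolation, as a hedged illustration; the Corefile literal is trimmed for clarity and is not the cluster's actual ConfigMap contents.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block ahead of the forward directive,
// mirroring the sed address used in the log above.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.94.1"))
}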
	I1212 00:33:43.734653  272590 node_ready.go:49] node "no-preload-675290" is "Ready"
	I1212 00:33:43.734705  272590 node_ready.go:38] duration metric: took 13.002376355s for node "no-preload-675290" to be "Ready" ...
	I1212 00:33:43.734724  272590 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:33:43.734797  272590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:33:43.750081  272590 api_server.go:72] duration metric: took 13.604432741s to wait for apiserver process to appear ...
	I1212 00:33:43.750104  272590 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:33:43.750123  272590 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:33:43.755396  272590 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 00:33:43.756073  272590 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 00:33:43.756093  272590 api_server.go:131] duration metric: took 5.983405ms to wait for apiserver health ...
	I1212 00:33:43.756101  272590 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:33:43.759052  272590 system_pods.go:59] 8 kube-system pods found
	I1212 00:33:43.759083  272590 system_pods.go:61] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:43.759090  272590 system_pods.go:61] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:43.759097  272590 system_pods.go:61] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:43.759103  272590 system_pods.go:61] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:43.759118  272590 system_pods.go:61] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:43.759126  272590 system_pods.go:61] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:43.759132  272590 system_pods.go:61] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:43.759150  272590 system_pods.go:61] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:43.759160  272590 system_pods.go:74] duration metric: took 3.053065ms to wait for pod list to return data ...
	I1212 00:33:43.759171  272590 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:33:43.761049  272590 default_sa.go:45] found service account: "default"
	I1212 00:33:43.761074  272590 default_sa.go:55] duration metric: took 1.895794ms for default service account to be created ...
	I1212 00:33:43.761081  272590 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:33:43.763348  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:43.763384  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:43.763391  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:43.763399  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:43.763404  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:43.763419  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:43.763425  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:43.763430  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:43.763439  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:43.763507  272590 retry.go:31] will retry after 264.298758ms: missing components: kube-dns
	I1212 00:33:44.031211  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:44.031246  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:44.031253  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:44.031262  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:44.031268  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:44.031276  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:44.031281  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:44.031286  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:44.031294  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:44.031311  272590 retry.go:31] will retry after 311.660302ms: missing components: kube-dns
	I1212 00:33:44.346179  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:44.346210  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:44.346216  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:44.346220  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:44.346224  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:44.346229  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:44.346232  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:44.346235  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:44.346240  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:44.346254  272590 retry.go:31] will retry after 325.219552ms: missing components: kube-dns
	I1212 00:33:44.674796  272590 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:44.674821  272590 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Running
	I1212 00:33:44.674827  272590 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running
	I1212 00:33:44.674831  272590 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:33:44.674834  272590 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running
	I1212 00:33:44.674839  272590 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running
	I1212 00:33:44.674842  272590 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:33:44.674847  272590 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running
	I1212 00:33:44.674850  272590 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Running
	I1212 00:33:44.674858  272590 system_pods.go:126] duration metric: took 913.771066ms to wait for k8s-apps to be running ...
	I1212 00:33:44.674868  272590 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:33:44.674910  272590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:33:44.687374  272590 system_svc.go:56] duration metric: took 12.493284ms WaitForService to wait for kubelet
	I1212 00:33:44.687396  272590 kubeadm.go:587] duration metric: took 14.541752044s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:33:44.687415  272590 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:33:44.689839  272590 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:33:44.689859  272590 node_conditions.go:123] node cpu capacity is 8
	I1212 00:33:44.689872  272590 node_conditions.go:105] duration metric: took 2.452154ms to run NodePressure ...
	I1212 00:33:44.689883  272590 start.go:242] waiting for startup goroutines ...
	I1212 00:33:44.689889  272590 start.go:247] waiting for cluster config update ...
	I1212 00:33:44.689899  272590 start.go:256] writing updated cluster config ...
	I1212 00:33:44.690128  272590 ssh_runner.go:195] Run: rm -f paused
	I1212 00:33:44.693896  272590 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:44.696772  272590 pod_ready.go:83] waiting for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.700545  272590 pod_ready.go:94] pod "coredns-7d764666f9-44t4m" is "Ready"
	I1212 00:33:44.700562  272590 pod_ready.go:86] duration metric: took 3.773553ms for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.702235  272590 pod_ready.go:83] waiting for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.705546  272590 pod_ready.go:94] pod "etcd-no-preload-675290" is "Ready"
	I1212 00:33:44.705562  272590 pod_ready.go:86] duration metric: took 3.311339ms for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.707183  272590 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.710720  272590 pod_ready.go:94] pod "kube-apiserver-no-preload-675290" is "Ready"
	I1212 00:33:44.710738  272590 pod_ready.go:86] duration metric: took 3.539875ms for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:44.712525  272590 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.097959  272590 pod_ready.go:94] pod "kube-controller-manager-no-preload-675290" is "Ready"
	I1212 00:33:45.097987  272590 pod_ready.go:86] duration metric: took 385.439817ms for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.297886  272590 pod_ready.go:83] waiting for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.697387  272590 pod_ready.go:94] pod "kube-proxy-7pxpp" is "Ready"
	I1212 00:33:45.697415  272590 pod_ready.go:86] duration metric: took 399.505552ms for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:45.898558  272590 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:46.298339  272590 pod_ready.go:94] pod "kube-scheduler-no-preload-675290" is "Ready"
	I1212 00:33:46.298363  272590 pod_ready.go:86] duration metric: took 399.784653ms for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:46.298375  272590 pod_ready.go:40] duration metric: took 1.604453206s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:46.341140  272590 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 00:33:46.342768  272590 out.go:179] * Done! kubectl is now configured to use "no-preload-675290" cluster and "default" namespace by default
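The pod_ready checks above repeatedly ask whether each kube-system pod has reached the Ready condition. A hedged client-go sketch of that kind of check is below; it lists kube-system pods and reports their PodReady status. This is illustrative rather than minikube's pod_ready.go, and the kubeconfig path is an assumption about a typical developer environment.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config (assumed)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Printf("%-55s ready=%v\n", pod.Name, isReady(pod))
	}
}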
	W1212 00:33:44.820992  278144 node_ready.go:57] node "embed-certs-858659" has "Ready":"False" status (will retry)
	W1212 00:33:47.321019  278144 node_ready.go:57] node "embed-certs-858659" has "Ready":"False" status (will retry)
	I1212 00:33:51.580220  263844 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.234699713s)
	W1212 00:33:51.580261  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:43158->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:43158->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	W1212 00:33:49.821281  278144 node_ready.go:57] node "embed-certs-858659" has "Ready":"False" status (will retry)
	W1212 00:33:52.320588  278144 node_ready.go:57] node "embed-certs-858659" has "Ready":"False" status (will retry)
	I1212 00:33:53.820142  278144 node_ready.go:49] node "embed-certs-858659" is "Ready"
	I1212 00:33:53.820171  278144 node_ready.go:38] duration metric: took 11.00213947s for node "embed-certs-858659" to be "Ready" ...
	I1212 00:33:53.820184  278144 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:33:53.820228  278144 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:33:53.832052  278144 api_server.go:72] duration metric: took 11.271920587s to wait for apiserver process to appear ...
	I1212 00:33:53.832072  278144 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:33:53.832087  278144 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:33:53.835866  278144 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1212 00:33:53.836816  278144 api_server.go:141] control plane version: v1.34.2
	I1212 00:33:53.836852  278144 api_server.go:131] duration metric: took 4.772765ms to wait for apiserver health ...
	I1212 00:33:53.836863  278144 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:33:53.839734  278144 system_pods.go:59] 8 kube-system pods found
	I1212 00:33:53.839760  278144 system_pods.go:61] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:53.839765  278144 system_pods.go:61] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running
	I1212 00:33:53.839772  278144 system_pods.go:61] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running
	I1212 00:33:53.839776  278144 system_pods.go:61] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running
	I1212 00:33:53.839780  278144 system_pods.go:61] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running
	I1212 00:33:53.839783  278144 system_pods.go:61] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running
	I1212 00:33:53.839786  278144 system_pods.go:61] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running
	I1212 00:33:53.839800  278144 system_pods.go:61] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:53.839809  278144 system_pods.go:74] duration metric: took 2.9403ms to wait for pod list to return data ...
	I1212 00:33:53.839816  278144 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:33:53.841654  278144 default_sa.go:45] found service account: "default"
	I1212 00:33:53.841674  278144 default_sa.go:55] duration metric: took 1.848813ms for default service account to be created ...
	I1212 00:33:53.841685  278144 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:33:53.847152  278144 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:53.847189  278144 system_pods.go:89] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:53.847196  278144 system_pods.go:89] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running
	I1212 00:33:53.847205  278144 system_pods.go:89] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running
	I1212 00:33:53.847211  278144 system_pods.go:89] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running
	I1212 00:33:53.847217  278144 system_pods.go:89] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running
	I1212 00:33:53.847221  278144 system_pods.go:89] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running
	I1212 00:33:53.847226  278144 system_pods.go:89] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running
	I1212 00:33:53.847233  278144 system_pods.go:89] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:53.847255  278144 retry.go:31] will retry after 242.15138ms: missing components: kube-dns
	I1212 00:33:54.092517  278144 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:54.092553  278144 system_pods.go:89] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:54.092562  278144 system_pods.go:89] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running
	I1212 00:33:54.092571  278144 system_pods.go:89] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running
	I1212 00:33:54.092577  278144 system_pods.go:89] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running
	I1212 00:33:54.092588  278144 system_pods.go:89] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running
	I1212 00:33:54.092597  278144 system_pods.go:89] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running
	I1212 00:33:54.092603  278144 system_pods.go:89] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running
	I1212 00:33:54.092614  278144 system_pods.go:89] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:54.092635  278144 retry.go:31] will retry after 275.013468ms: missing components: kube-dns
	I1212 00:33:54.371511  278144 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:54.371568  278144 system_pods.go:89] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:33:54.371583  278144 system_pods.go:89] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running
	I1212 00:33:54.371592  278144 system_pods.go:89] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running
	I1212 00:33:54.371596  278144 system_pods.go:89] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running
	I1212 00:33:54.371600  278144 system_pods.go:89] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running
	I1212 00:33:54.371606  278144 system_pods.go:89] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running
	I1212 00:33:54.371610  278144 system_pods.go:89] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running
	I1212 00:33:54.371615  278144 system_pods.go:89] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:33:54.371631  278144 retry.go:31] will retry after 370.876841ms: missing components: kube-dns
	I1212 00:33:54.745852  278144 system_pods.go:86] 8 kube-system pods found
	I1212 00:33:54.745875  278144 system_pods.go:89] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Running
	I1212 00:33:54.745880  278144 system_pods.go:89] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running
	I1212 00:33:54.745884  278144 system_pods.go:89] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running
	I1212 00:33:54.745887  278144 system_pods.go:89] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running
	I1212 00:33:54.745891  278144 system_pods.go:89] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running
	I1212 00:33:54.745894  278144 system_pods.go:89] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running
	I1212 00:33:54.745898  278144 system_pods.go:89] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running
	I1212 00:33:54.745901  278144 system_pods.go:89] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Running
	I1212 00:33:54.745908  278144 system_pods.go:126] duration metric: took 904.217774ms to wait for k8s-apps to be running ...
	I1212 00:33:54.745915  278144 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:33:54.745955  278144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:33:54.758005  278144 system_svc.go:56] duration metric: took 12.070781ms WaitForService to wait for kubelet
	I1212 00:33:54.758030  278144 kubeadm.go:587] duration metric: took 12.197901099s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:33:54.758047  278144 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:33:54.760339  278144 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:33:54.760360  278144 node_conditions.go:123] node cpu capacity is 8
	I1212 00:33:54.760381  278144 node_conditions.go:105] duration metric: took 2.327926ms to run NodePressure ...
	I1212 00:33:54.760391  278144 start.go:242] waiting for startup goroutines ...
	I1212 00:33:54.760406  278144 start.go:247] waiting for cluster config update ...
	I1212 00:33:54.760415  278144 start.go:256] writing updated cluster config ...
	I1212 00:33:54.760697  278144 ssh_runner.go:195] Run: rm -f paused
	I1212 00:33:54.764028  278144 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:54.767297  278144 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8x66p" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:54.771207  278144 pod_ready.go:94] pod "coredns-66bc5c9577-8x66p" is "Ready"
	I1212 00:33:54.771229  278144 pod_ready.go:86] duration metric: took 3.910409ms for pod "coredns-66bc5c9577-8x66p" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:54.773048  278144 pod_ready.go:83] waiting for pod "etcd-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:54.776421  278144 pod_ready.go:94] pod "etcd-embed-certs-858659" is "Ready"
	I1212 00:33:54.776441  278144 pod_ready.go:86] duration metric: took 3.378121ms for pod "etcd-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:54.778016  278144 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:54.781410  278144 pod_ready.go:94] pod "kube-apiserver-embed-certs-858659" is "Ready"
	I1212 00:33:54.781430  278144 pod_ready.go:86] duration metric: took 3.398425ms for pod "kube-apiserver-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:54.783208  278144 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:55.167800  278144 pod_ready.go:94] pod "kube-controller-manager-embed-certs-858659" is "Ready"
	I1212 00:33:55.167824  278144 pod_ready.go:86] duration metric: took 384.599339ms for pod "kube-controller-manager-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:55.368297  278144 pod_ready.go:83] waiting for pod "kube-proxy-httpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:55.768134  278144 pod_ready.go:94] pod "kube-proxy-httpr" is "Ready"
	I1212 00:33:55.768162  278144 pod_ready.go:86] duration metric: took 399.83752ms for pod "kube-proxy-httpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:55.968342  278144 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:56.368320  278144 pod_ready.go:94] pod "kube-scheduler-embed-certs-858659" is "Ready"
	I1212 00:33:56.368343  278144 pod_ready.go:86] duration metric: took 399.980244ms for pod "kube-scheduler-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:33:56.368354  278144 pod_ready.go:40] duration metric: took 1.604307212s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:33:56.411018  278144 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:33:56.412586  278144 out.go:179] * Done! kubectl is now configured to use "embed-certs-858659" cluster and "default" namespace by default
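	With the cluster up, the readiness waits logged above can be spot-checked from the host. A minimal sketch, assuming the kubectl context is the profile name (as the final log line states) and reusing the label selectors from the pod_ready entries:
	
	# pods minikube waited on, selected by the same labels as above
	kubectl --context embed-certs-858659 -n kube-system get pods \
	  -l 'k8s-app in (kube-dns, kube-proxy)'
	kubectl --context embed-certs-858659 -n kube-system get pods \
	  -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)'
	# node readiness that the system_pods/node_conditions checks covered earlier
	kubectl --context embed-certs-858659 get nodes -o wide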
	I1212 00:33:54.081853  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:54.082278  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:54.082338  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:33:54.082396  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:33:54.109118  263844 cri.go:89] found id: "e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:33:54.109136  263844 cri.go:89] found id: ""
	I1212 00:33:54.109143  263844 logs.go:282] 1 containers: [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106]
	I1212 00:33:54.109189  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:54.112861  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:33:54.112918  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:33:54.136832  263844 cri.go:89] found id: ""
	I1212 00:33:54.136852  263844 logs.go:282] 0 containers: []
	W1212 00:33:54.136859  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:33:54.136867  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:33:54.136914  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:33:54.160741  263844 cri.go:89] found id: ""
	I1212 00:33:54.160759  263844 logs.go:282] 0 containers: []
	W1212 00:33:54.160765  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:33:54.160770  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:33:54.160817  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:33:54.185009  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:33:54.185024  263844 cri.go:89] found id: ""
	I1212 00:33:54.185030  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:33:54.185076  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:54.188403  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:33:54.188458  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:33:54.212913  263844 cri.go:89] found id: ""
	I1212 00:33:54.212937  263844 logs.go:282] 0 containers: []
	W1212 00:33:54.212946  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:33:54.212955  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:33:54.213016  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:33:54.236926  263844 cri.go:89] found id: "b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:33:54.236948  263844 cri.go:89] found id: ""
	I1212 00:33:54.236957  263844 logs.go:282] 1 containers: [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0]
	I1212 00:33:54.237002  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:54.240484  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:33:54.240539  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:33:54.264018  263844 cri.go:89] found id: ""
	I1212 00:33:54.264038  263844 logs.go:282] 0 containers: []
	W1212 00:33:54.264045  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:33:54.264050  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:33:54.264094  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:33:54.287159  263844 cri.go:89] found id: ""
	I1212 00:33:54.287190  263844 logs.go:282] 0 containers: []
	W1212 00:33:54.287198  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:33:54.287207  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:33:54.287221  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:33:54.339388  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:33:54.339411  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:33:54.339424  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:33:54.367730  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:33:54.367753  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:33:54.392578  263844 logs.go:123] Gathering logs for kube-controller-manager [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0] ...
	I1212 00:33:54.392603  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:33:54.417662  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:33:54.417688  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:33:54.460422  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:33:54.460446  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:33:54.492185  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:33:54.492208  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:33:54.570354  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:33:54.570399  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:33:57.088420  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:33:57.088822  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:33:57.088876  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:33:57.088951  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:33:57.115694  263844 cri.go:89] found id: "e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:33:57.115712  263844 cri.go:89] found id: ""
	I1212 00:33:57.115719  263844 logs.go:282] 1 containers: [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106]
	I1212 00:33:57.115770  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:57.119463  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:33:57.119538  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:33:57.143257  263844 cri.go:89] found id: ""
	I1212 00:33:57.143280  263844 logs.go:282] 0 containers: []
	W1212 00:33:57.143289  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:33:57.143295  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:33:57.143358  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:33:57.168170  263844 cri.go:89] found id: ""
	I1212 00:33:57.168196  263844 logs.go:282] 0 containers: []
	W1212 00:33:57.168205  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:33:57.168212  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:33:57.168266  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:33:57.193689  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:33:57.193707  263844 cri.go:89] found id: ""
	I1212 00:33:57.193714  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:33:57.193758  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:57.197099  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:33:57.197161  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:33:57.221361  263844 cri.go:89] found id: ""
	I1212 00:33:57.221384  263844 logs.go:282] 0 containers: []
	W1212 00:33:57.221394  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:33:57.221401  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:33:57.221451  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:33:57.245440  263844 cri.go:89] found id: "b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:33:57.245454  263844 cri.go:89] found id: ""
	I1212 00:33:57.245461  263844 logs.go:282] 1 containers: [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0]
	I1212 00:33:57.245525  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:33:57.248998  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:33:57.249054  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:33:57.273656  263844 cri.go:89] found id: ""
	I1212 00:33:57.273677  263844 logs.go:282] 0 containers: []
	W1212 00:33:57.273686  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:33:57.273693  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:33:57.273740  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:33:57.297272  263844 cri.go:89] found id: ""
	I1212 00:33:57.297291  263844 logs.go:282] 0 containers: []
	W1212 00:33:57.297300  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:33:57.297313  263844 logs.go:123] Gathering logs for kube-controller-manager [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0] ...
	I1212 00:33:57.297327  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:33:57.321602  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:33:57.321625  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:33:57.358356  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:33:57.358378  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:33:57.391850  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:33:57.391873  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:33:57.465900  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:33:57.465933  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:33:57.482552  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:33:57.482575  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:33:57.539409  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:33:57.539433  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:33:57.539449  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:33:57.572371  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:33:57.572396  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:00.099360  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:00.099781  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:00.099841  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:00.099904  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:00.125579  263844 cri.go:89] found id: "e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:34:00.125603  263844 cri.go:89] found id: ""
	I1212 00:34:00.125613  263844 logs.go:282] 1 containers: [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106]
	I1212 00:34:00.125659  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:00.129180  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:00.129232  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:00.154028  263844 cri.go:89] found id: ""
	I1212 00:34:00.154048  263844 logs.go:282] 0 containers: []
	W1212 00:34:00.154056  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:00.154061  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:00.154114  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:00.178057  263844 cri.go:89] found id: ""
	I1212 00:34:00.178076  263844 logs.go:282] 0 containers: []
	W1212 00:34:00.178083  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:00.178088  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:00.178137  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:00.201555  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:00.201571  263844 cri.go:89] found id: ""
	I1212 00:34:00.201577  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:00.201630  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:00.205057  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:00.205113  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:00.228239  263844 cri.go:89] found id: ""
	I1212 00:34:00.228262  263844 logs.go:282] 0 containers: []
	W1212 00:34:00.228272  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:00.228279  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:00.228322  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:00.252725  263844 cri.go:89] found id: "b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:34:00.252745  263844 cri.go:89] found id: ""
	I1212 00:34:00.252753  263844 logs.go:282] 1 containers: [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0]
	I1212 00:34:00.252800  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:00.256173  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:00.256221  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:00.279836  263844 cri.go:89] found id: ""
	I1212 00:34:00.279856  263844 logs.go:282] 0 containers: []
	W1212 00:34:00.279866  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:00.279874  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:00.279916  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:00.302579  263844 cri.go:89] found id: ""
	I1212 00:34:00.302599  263844 logs.go:282] 0 containers: []
	W1212 00:34:00.302608  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:00.302620  263844 logs.go:123] Gathering logs for kube-controller-manager [b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0] ...
	I1212 00:34:00.302630  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0be5bd50d86db1b4cb8bfc90871226c93ab4783403a91785bb170f7719163b0"
	I1212 00:34:00.325551  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:00.325581  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:00.364187  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:00.364212  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:00.392031  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:00.392060  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:00.451352  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:00.451378  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:00.464584  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:00.464602  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:00.517728  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:00.517746  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:34:00.517759  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:34:00.546538  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:00.546566  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
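	At this point the 263844 run is looping: each healthz probe against https://192.168.85.2:8443 is refused, crictl finds kube-apiserver, kube-scheduler and kube-controller-manager containers but no etcd, coredns, kube-proxy, kindnet or storage-provisioner, and the same diagnostic bundle (describe nodes, component logs, CRI-O, kubelet, dmesg) is re-gathered on every pass. The bundle can be reproduced on the node with the commands already shown in the log; the health probe itself is sketched here with curl as an assumption:
	
	# health probe that keeps failing (expect connection refused while nothing listens on 8443)
	curl -k https://192.168.85.2:8443/healthz
	# container inventory, as in the cri.go/logs.go entries above
	sudo crictl ps -a --name=kube-apiserver
	sudo crictl ps -a --name=etcd
	# unit logs minikube tails on each pass
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400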
	
	
	==> CRI-O <==
	Dec 12 00:33:53 embed-certs-858659 crio[774]: time="2025-12-12T00:33:53.741569152Z" level=info msg="Starting container: bde30ae704f49e684100101da2e34a19f94d576babf4262e6c5b1cf8883d22f4" id=6aa63c58-492a-49ed-820f-254f82128ba8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:33:53 embed-certs-858659 crio[774]: time="2025-12-12T00:33:53.74349734Z" level=info msg="Started container" PID=1884 containerID=bde30ae704f49e684100101da2e34a19f94d576babf4262e6c5b1cf8883d22f4 description=kube-system/coredns-66bc5c9577-8x66p/coredns id=6aa63c58-492a-49ed-820f-254f82128ba8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee33231ca2679862af2fdd712f09ae86fa513d9b9a7a593c795e9b0c297a398a
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.883251589Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4618b115-d093-48d3-95ba-bd707262e856 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.883333782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.890051168Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:19932d2d2e8d44e75e47ee75635a2b170f2ae2d6a9c87993f90b77a1f5f77064 UID:f82842ad-b3b7-41c5-a1cf-a78ae8f92ea1 NetNS:/var/run/netns/53580967-a9c0-42c4-9626-a8d555b60f77 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002a4638}] Aliases:map[]}"
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.890087576Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.901603196Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:19932d2d2e8d44e75e47ee75635a2b170f2ae2d6a9c87993f90b77a1f5f77064 UID:f82842ad-b3b7-41c5-a1cf-a78ae8f92ea1 NetNS:/var/run/netns/53580967-a9c0-42c4-9626-a8d555b60f77 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002a4638}] Aliases:map[]}"
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.901791333Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.902739937Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.90349411Z" level=info msg="Ran pod sandbox 19932d2d2e8d44e75e47ee75635a2b170f2ae2d6a9c87993f90b77a1f5f77064 with infra container: default/busybox/POD" id=4618b115-d093-48d3-95ba-bd707262e856 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.904732181Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=414aff70-c93a-4da9-add8-d9344c08deb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.904877511Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=414aff70-c93a-4da9-add8-d9344c08deb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.904924188Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=414aff70-c93a-4da9-add8-d9344c08deb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.905796333Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c215bab6-03c1-406c-bbb5-0d852d44c809 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:33:56 embed-certs-858659 crio[774]: time="2025-12-12T00:33:56.908016881Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 12 00:33:57 embed-certs-858659 crio[774]: time="2025-12-12T00:33:57.514300572Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c215bab6-03c1-406c-bbb5-0d852d44c809 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:33:57 embed-certs-858659 crio[774]: time="2025-12-12T00:33:57.514894875Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9305789c-b140-46aa-a24d-86b11e6d1b9e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:57 embed-certs-858659 crio[774]: time="2025-12-12T00:33:57.51607227Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1a885ec0-631f-4573-8725-eaee68ac394f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:33:57 embed-certs-858659 crio[774]: time="2025-12-12T00:33:57.51908995Z" level=info msg="Creating container: default/busybox/busybox" id=14c33b05-3122-4999-bd4a-fee33d2508e1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:33:57 embed-certs-858659 crio[774]: time="2025-12-12T00:33:57.519205001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:57 embed-certs-858659 crio[774]: time="2025-12-12T00:33:57.522601019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:57 embed-certs-858659 crio[774]: time="2025-12-12T00:33:57.523008925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:33:57 embed-certs-858659 crio[774]: time="2025-12-12T00:33:57.550667293Z" level=info msg="Created container 26bb116b62c4723d9183afaa6449e3fe7cbdfc517091a23a1dd39191e4ca24b4: default/busybox/busybox" id=14c33b05-3122-4999-bd4a-fee33d2508e1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:33:57 embed-certs-858659 crio[774]: time="2025-12-12T00:33:57.551200826Z" level=info msg="Starting container: 26bb116b62c4723d9183afaa6449e3fe7cbdfc517091a23a1dd39191e4ca24b4" id=b67ae219-ff44-49c4-aa74-3d394dfffe85 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:33:57 embed-certs-858659 crio[774]: time="2025-12-12T00:33:57.553224511Z" level=info msg="Started container" PID=1964 containerID=26bb116b62c4723d9183afaa6449e3fe7cbdfc517091a23a1dd39191e4ca24b4 description=default/busybox/busybox id=b67ae219-ff44-49c4-aa74-3d394dfffe85 name=/runtime.v1.RuntimeService/StartContainer sandboxID=19932d2d2e8d44e75e47ee75635a2b170f2ae2d6a9c87993f90b77a1f5f77064
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	26bb116b62c47       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   19932d2d2e8d4       busybox                                      default
	bde30ae704f49       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   ee33231ca2679       coredns-66bc5c9577-8x66p                     kube-system
	6a5067d2ba573       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   e314fc16bb752       storage-provisioner                          kube-system
	9389eb805d925       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   93efd243e536a       kindnet-9jvdg                                kube-system
	ea98024ea4063       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      22 seconds ago      Running             kube-proxy                0                   e789391ce98ec       kube-proxy-httpr                             kube-system
	61938e3b490a2       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      32 seconds ago      Running             kube-apiserver            0                   77a7528f25108       kube-apiserver-embed-certs-858659            kube-system
	31ed298854496       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      32 seconds ago      Running             kube-scheduler            0                   07962faabd71c       kube-scheduler-embed-certs-858659            kube-system
	669109adcde45       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      32 seconds ago      Running             etcd                      0                   9c5463955d369       etcd-embed-certs-858659                      kube-system
	5f21cd6a2d21d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      32 seconds ago      Running             kube-controller-manager   0                   031695d5d2473       kube-controller-manager-embed-certs-858659   kube-system
	
	
	==> coredns [bde30ae704f49e684100101da2e34a19f94d576babf4262e6c5b1cf8883d22f4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49706 - 1119 "HINFO IN 1944187067972960859.756805504446204930. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.063520833s
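	This coredns instance serves the healthy embed-certs-858659 cluster, and the busybox pod started in the CRI-O log above runs in the default namespace, so in-cluster DNS can be exercised from it. A sketch (pod and context names taken from the log; the lookup target is an assumption, not something this test performs):
	
	kubectl --context embed-certs-858659 exec busybox -- nslookup kubernetes.default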
	
	
	==> describe nodes <==
	Name:               embed-certs-858659
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-858659
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=embed-certs-858659
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_33_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:33:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-858659
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:33:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:33:56 +0000   Fri, 12 Dec 2025 00:33:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:33:56 +0000   Fri, 12 Dec 2025 00:33:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:33:56 +0000   Fri, 12 Dec 2025 00:33:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:33:56 +0000   Fri, 12 Dec 2025 00:33:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-858659
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                116d1391-d680-420b-9323-ddc7dc668b8a
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-66bc5c9577-8x66p                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     22s
	  kube-system                 etcd-embed-certs-858659                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-9jvdg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-embed-certs-858659             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-embed-certs-858659    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-httpr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-embed-certs-858659             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s (x8 over 33s)  kubelet          Node embed-certs-858659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 33s)  kubelet          Node embed-certs-858659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x8 over 33s)  kubelet          Node embed-certs-858659 status is now: NodeHasSufficientPID
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node embed-certs-858659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node embed-certs-858659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node embed-certs-858659 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23s                node-controller  Node embed-certs-858659 event: Registered Node embed-certs-858659 in Controller
	  Normal  NodeReady                11s                kubelet          Node embed-certs-858659 status is now: NodeReady
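	This node description comes from the healthy embed-certs-858659 cluster and can be regenerated directly; the Allocated resources figures are simply the column sums of the pod requests listed (100m + 100m + 100m + 250m + 200m + 100m = 850m CPU, and 70Mi + 100Mi + 50Mi = 220Mi memory, i.e. roughly 10% of the 8-CPU node). A sketch:
	
	kubectl --context embed-certs-858659 describe node embed-certs-858659
	# just the allocation summary
	kubectl --context embed-certs-858659 describe node embed-certs-858659 | grep -A 8 'Allocated resources'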
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [669109adcde45fc8c806a76adda5854b31b9e0566f500512e0cc83d14f33dd89] <==
	{"level":"warn","ts":"2025-12-12T00:33:33.483642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.492672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.501163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.509292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.523614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.530889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.539542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.549095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.555487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.563582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.571833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.579174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.585405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.592822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.599541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.606764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.615005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.622650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.630166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.636438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.643206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.664269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.670833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.677437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:33:33.722989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41296","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:34:05 up  1:16,  0 user,  load average: 2.71, 2.46, 1.67
	Linux embed-certs-858659 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9389eb805d925ff399e27d9461f8ff237e2d948425593305e2415c01c84f316c] <==
	I1212 00:33:42.905061       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:33:42.905334       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1212 00:33:42.905452       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:33:42.905467       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:33:42.905499       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:33:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:33:43.106824       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:33:43.106897       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:33:43.106912       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:33:43.107775       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:33:43.507793       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:33:43.507815       1 metrics.go:72] Registering metrics
	I1212 00:33:43.507866       1 controller.go:711] "Syncing nftables rules"
	I1212 00:33:53.108771       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:33:53.108821       1 main.go:301] handling current node
	I1212 00:34:03.110650       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:34:03.110695       1 main.go:301] handling current node
	
	
	==> kube-apiserver [61938e3b490a29b81b05b39a1c720ddbd6e2a41e76bc757c0c9926fa31ed7d6f] <==
	I1212 00:33:34.231891       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:33:34.232595       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:33:34.232869       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1212 00:33:34.238095       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:33:34.238263       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 00:33:34.250102       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 00:33:34.269645       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:33:35.125016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 00:33:35.128262       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 00:33:35.128280       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:33:35.521578       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:33:35.551251       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:33:35.625936       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 00:33:35.630724       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1212 00:33:35.631450       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:33:35.635546       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:33:36.229930       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:33:36.595342       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:33:36.603355       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 00:33:36.609410       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 00:33:42.179025       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:33:42.182245       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:33:42.277779       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:33:42.327245       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1212 00:34:03.660102       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:54138: use of closed network connection
	
	
	==> kube-controller-manager [5f21cd6a2d21d9313ca30bd1975165998272968a7338d8e1c7831bc23fec76b0] <==
	I1212 00:33:41.184941       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:33:41.184974       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 00:33:41.225958       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 00:33:41.225979       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1212 00:33:41.225958       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 00:33:41.225986       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 00:33:41.225982       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 00:33:41.226117       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 00:33:41.226128       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 00:33:41.225959       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 00:33:41.226352       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 00:33:41.226417       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 00:33:41.226432       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 00:33:41.230210       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 00:33:41.230229       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 00:33:41.230303       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 00:33:41.230364       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 00:33:41.230377       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 00:33:41.230384       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 00:33:41.233529       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:33:41.234415       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 00:33:41.236101       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-858659" podCIDRs=["10.244.0.0/24"]
	I1212 00:33:41.246082       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:33:41.247131       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1212 00:33:56.165865       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ea98024ea40633d15907625d613d1f46bfd9bec1d0cf025fd85077c43ccc9485] <==
	I1212 00:33:42.783172       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:33:42.842667       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 00:33:42.943031       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 00:33:42.943062       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1212 00:33:42.943143       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:33:42.960747       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:33:42.960792       1 server_linux.go:132] "Using iptables Proxier"
	I1212 00:33:42.967371       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:33:42.967863       1 server.go:527] "Version info" version="v1.34.2"
	I1212 00:33:42.967896       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:33:42.970587       1 config.go:200] "Starting service config controller"
	I1212 00:33:42.970660       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:33:42.970758       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:33:42.970778       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:33:42.970800       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:33:42.970812       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:33:42.970817       1 config.go:309] "Starting node config controller"
	I1212 00:33:42.970833       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:33:42.970841       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:33:43.070958       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:33:43.071043       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:33:43.071083       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [31ed298854496eb33e49b96dfc509c2ea0fef9881138065d2e37d339d0283f33] <==
	I1212 00:33:34.802758       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:33:34.804859       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:33:34.804889       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:33:34.805120       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 00:33:34.805185       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 00:33:34.806238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1212 00:33:34.806378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 00:33:34.806569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 00:33:34.806953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 00:33:34.807137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 00:33:34.807327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 00:33:34.808015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 00:33:34.808028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 00:33:34.808156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 00:33:34.808162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 00:33:34.808204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 00:33:34.808363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 00:33:34.808712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 00:33:34.808730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 00:33:34.808707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 00:33:34.808818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 00:33:34.808883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 00:33:34.808892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 00:33:34.808979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1212 00:33:35.905909       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:33:37 embed-certs-858659 kubelet[1344]: I1212 00:33:37.450119    1344 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-embed-certs-858659"
	Dec 12 00:33:37 embed-certs-858659 kubelet[1344]: E1212 00:33:37.457505    1344 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-858659\" already exists" pod="kube-system/kube-scheduler-embed-certs-858659"
	Dec 12 00:33:37 embed-certs-858659 kubelet[1344]: I1212 00:33:37.465067    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-858659" podStartSLOduration=1.4650526529999999 podStartE2EDuration="1.465052653s" podCreationTimestamp="2025-12-12 00:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:37.457583224 +0000 UTC m=+1.100792913" watchObservedRunningTime="2025-12-12 00:33:37.465052653 +0000 UTC m=+1.108262320"
	Dec 12 00:33:37 embed-certs-858659 kubelet[1344]: I1212 00:33:37.474400    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-858659" podStartSLOduration=1.474367977 podStartE2EDuration="1.474367977s" podCreationTimestamp="2025-12-12 00:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:37.465129739 +0000 UTC m=+1.108339407" watchObservedRunningTime="2025-12-12 00:33:37.474367977 +0000 UTC m=+1.117577629"
	Dec 12 00:33:41 embed-certs-858659 kubelet[1344]: I1212 00:33:41.244291    1344 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 00:33:41 embed-certs-858659 kubelet[1344]: I1212 00:33:41.245093    1344 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 00:33:42 embed-certs-858659 kubelet[1344]: I1212 00:33:42.358063    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6220e54-3a3a-4fbe-94e1-d0117757204a-xtables-lock\") pod \"kube-proxy-httpr\" (UID: \"d6220e54-3a3a-4fbe-94e1-d0117757204a\") " pod="kube-system/kube-proxy-httpr"
	Dec 12 00:33:42 embed-certs-858659 kubelet[1344]: I1212 00:33:42.358100    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ntpg\" (UniqueName: \"kubernetes.io/projected/d6220e54-3a3a-4fbe-94e1-d0117757204a-kube-api-access-9ntpg\") pod \"kube-proxy-httpr\" (UID: \"d6220e54-3a3a-4fbe-94e1-d0117757204a\") " pod="kube-system/kube-proxy-httpr"
	Dec 12 00:33:42 embed-certs-858659 kubelet[1344]: I1212 00:33:42.358119    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/295eca47-46bb-43bf-981b-7320ba579410-lib-modules\") pod \"kindnet-9jvdg\" (UID: \"295eca47-46bb-43bf-981b-7320ba579410\") " pod="kube-system/kindnet-9jvdg"
	Dec 12 00:33:42 embed-certs-858659 kubelet[1344]: I1212 00:33:42.358149    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6220e54-3a3a-4fbe-94e1-d0117757204a-kube-proxy\") pod \"kube-proxy-httpr\" (UID: \"d6220e54-3a3a-4fbe-94e1-d0117757204a\") " pod="kube-system/kube-proxy-httpr"
	Dec 12 00:33:42 embed-certs-858659 kubelet[1344]: I1212 00:33:42.358171    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6220e54-3a3a-4fbe-94e1-d0117757204a-lib-modules\") pod \"kube-proxy-httpr\" (UID: \"d6220e54-3a3a-4fbe-94e1-d0117757204a\") " pod="kube-system/kube-proxy-httpr"
	Dec 12 00:33:42 embed-certs-858659 kubelet[1344]: I1212 00:33:42.358191    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/295eca47-46bb-43bf-981b-7320ba579410-xtables-lock\") pod \"kindnet-9jvdg\" (UID: \"295eca47-46bb-43bf-981b-7320ba579410\") " pod="kube-system/kindnet-9jvdg"
	Dec 12 00:33:42 embed-certs-858659 kubelet[1344]: I1212 00:33:42.358212    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8pv5\" (UniqueName: \"kubernetes.io/projected/295eca47-46bb-43bf-981b-7320ba579410-kube-api-access-w8pv5\") pod \"kindnet-9jvdg\" (UID: \"295eca47-46bb-43bf-981b-7320ba579410\") " pod="kube-system/kindnet-9jvdg"
	Dec 12 00:33:42 embed-certs-858659 kubelet[1344]: I1212 00:33:42.358238    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/295eca47-46bb-43bf-981b-7320ba579410-cni-cfg\") pod \"kindnet-9jvdg\" (UID: \"295eca47-46bb-43bf-981b-7320ba579410\") " pod="kube-system/kindnet-9jvdg"
	Dec 12 00:33:43 embed-certs-858659 kubelet[1344]: I1212 00:33:43.471666    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9jvdg" podStartSLOduration=1.471645202 podStartE2EDuration="1.471645202s" podCreationTimestamp="2025-12-12 00:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:43.471554674 +0000 UTC m=+7.114764343" watchObservedRunningTime="2025-12-12 00:33:43.471645202 +0000 UTC m=+7.114854871"
	Dec 12 00:33:43 embed-certs-858659 kubelet[1344]: I1212 00:33:43.480156    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-httpr" podStartSLOduration=1.480137427 podStartE2EDuration="1.480137427s" podCreationTimestamp="2025-12-12 00:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:43.480122083 +0000 UTC m=+7.123331771" watchObservedRunningTime="2025-12-12 00:33:43.480137427 +0000 UTC m=+7.123347100"
	Dec 12 00:33:53 embed-certs-858659 kubelet[1344]: I1212 00:33:53.365893    1344 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 12 00:33:53 embed-certs-858659 kubelet[1344]: I1212 00:33:53.431915    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qt8n\" (UniqueName: \"kubernetes.io/projected/1e3f7607-a2f4-4ca4-84c0-8cffb038ee03-kube-api-access-5qt8n\") pod \"storage-provisioner\" (UID: \"1e3f7607-a2f4-4ca4-84c0-8cffb038ee03\") " pod="kube-system/storage-provisioner"
	Dec 12 00:33:53 embed-certs-858659 kubelet[1344]: I1212 00:33:53.431950    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e3f7607-a2f4-4ca4-84c0-8cffb038ee03-tmp\") pod \"storage-provisioner\" (UID: \"1e3f7607-a2f4-4ca4-84c0-8cffb038ee03\") " pod="kube-system/storage-provisioner"
	Dec 12 00:33:53 embed-certs-858659 kubelet[1344]: I1212 00:33:53.431968    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e3ac279-c897-4100-aa49-a94ed95d1b5a-config-volume\") pod \"coredns-66bc5c9577-8x66p\" (UID: \"1e3ac279-c897-4100-aa49-a94ed95d1b5a\") " pod="kube-system/coredns-66bc5c9577-8x66p"
	Dec 12 00:33:53 embed-certs-858659 kubelet[1344]: I1212 00:33:53.431982    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qg97\" (UniqueName: \"kubernetes.io/projected/1e3ac279-c897-4100-aa49-a94ed95d1b5a-kube-api-access-2qg97\") pod \"coredns-66bc5c9577-8x66p\" (UID: \"1e3ac279-c897-4100-aa49-a94ed95d1b5a\") " pod="kube-system/coredns-66bc5c9577-8x66p"
	Dec 12 00:33:54 embed-certs-858659 kubelet[1344]: I1212 00:33:54.495254    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8x66p" podStartSLOduration=12.495232885 podStartE2EDuration="12.495232885s" podCreationTimestamp="2025-12-12 00:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:54.49484616 +0000 UTC m=+18.138055836" watchObservedRunningTime="2025-12-12 00:33:54.495232885 +0000 UTC m=+18.138442557"
	Dec 12 00:33:54 embed-certs-858659 kubelet[1344]: I1212 00:33:54.505822    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.505803374 podStartE2EDuration="11.505803374s" podCreationTimestamp="2025-12-12 00:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:54.505734302 +0000 UTC m=+18.148943973" watchObservedRunningTime="2025-12-12 00:33:54.505803374 +0000 UTC m=+18.149013043"
	Dec 12 00:33:56 embed-certs-858659 kubelet[1344]: I1212 00:33:56.649413    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj96d\" (UniqueName: \"kubernetes.io/projected/f82842ad-b3b7-41c5-a1cf-a78ae8f92ea1-kube-api-access-vj96d\") pod \"busybox\" (UID: \"f82842ad-b3b7-41c5-a1cf-a78ae8f92ea1\") " pod="default/busybox"
	Dec 12 00:33:58 embed-certs-858659 kubelet[1344]: I1212 00:33:58.503136    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.892758454 podStartE2EDuration="2.503116039s" podCreationTimestamp="2025-12-12 00:33:56 +0000 UTC" firstStartedPulling="2025-12-12 00:33:56.905287844 +0000 UTC m=+20.548497493" lastFinishedPulling="2025-12-12 00:33:57.51564543 +0000 UTC m=+21.158855078" observedRunningTime="2025-12-12 00:33:58.503067068 +0000 UTC m=+22.146276737" watchObservedRunningTime="2025-12-12 00:33:58.503116039 +0000 UTC m=+22.146325708"
	
	
	==> storage-provisioner [6a5067d2ba57317bdcc8c1d419527e7ae9e103448568d8ef2b3aeb9aa37afd52] <==
	I1212 00:33:53.751836       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:33:53.760589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:33:53.760641       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 00:33:53.762651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:53.766899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:33:53.767035       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:33:53.767100       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78ef122f-55d8-421e-a9ec-895d80aa214b", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-858659_2ed860e4-e4a5-4226-bced-4fba6bcfbe7f became leader
	I1212 00:33:53.767190       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-858659_2ed860e4-e4a5-4226-bced-4fba6bcfbe7f!
	W1212 00:33:53.769275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:53.772664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:33:53.867369       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-858659_2ed860e4-e4a5-4226-bced-4fba6bcfbe7f!
	W1212 00:33:55.775840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:55.780174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:57.782535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:57.785950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:59.788459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:33:59.791865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:34:01.794484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:34:01.797964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:34:03.801579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:34:03.807364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-858659 -n embed-certs-858659
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-858659 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.02s)
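
Note on the storage-provisioner log above: the repeated "v1 Endpoints is deprecated in v1.33+" warnings come from the Endpoints-based leader-election lock the provisioner still uses. They are harmless here (the kube-system/k8s.io-minikube-hostpath lease is acquired at 00:33:53.767), but for reference, a minimal sketch of the same election done with client-go's Lease lock, which does not trigger the warning. This is not the provisioner's actual code; the identity string and timings are assumptions.

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock; the Endpoints-based lock used today is what triggers
	// the deprecation warnings seen in the log above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-provisioner-id"}, // assumed identity
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // assumed timings
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
			OnStoppedLeading: func() { /* stop provisioning */ },
		},
	})
}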

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-743506 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-743506 --alsologtostderr -v=1: exit status 80 (2.150989578s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-743506 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:34:46.632858  296823 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:34:46.633120  296823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:46.633130  296823 out.go:374] Setting ErrFile to fd 2...
	I1212 00:34:46.633134  296823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:46.633357  296823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:34:46.633637  296823 out.go:368] Setting JSON to false
	I1212 00:34:46.633657  296823 mustload.go:66] Loading cluster: old-k8s-version-743506
	I1212 00:34:46.634743  296823 config.go:182] Loaded profile config "old-k8s-version-743506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 00:34:46.635297  296823 cli_runner.go:164] Run: docker container inspect old-k8s-version-743506 --format={{.State.Status}}
	I1212 00:34:46.652547  296823 host.go:66] Checking if "old-k8s-version-743506" exists ...
	I1212 00:34:46.652794  296823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:46.707819  296823 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-12 00:34:46.697307668 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:34:46.708448  296823 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-743506 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 00:34:46.710231  296823 out.go:179] * Pausing node old-k8s-version-743506 ... 
	I1212 00:34:46.711152  296823 host.go:66] Checking if "old-k8s-version-743506" exists ...
	I1212 00:34:46.711392  296823 ssh_runner.go:195] Run: systemctl --version
	I1212 00:34:46.711435  296823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743506
	I1212 00:34:46.727718  296823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/old-k8s-version-743506/id_rsa Username:docker}
	I1212 00:34:46.820502  296823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:46.832636  296823 pause.go:52] kubelet running: true
	I1212 00:34:46.832720  296823 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:34:46.999892  296823 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:34:47.000021  296823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:34:47.065358  296823 cri.go:89] found id: "590f315a0e41429b9947025cf60b15230faac5f9cb474a9172b52736e8344a73"
	I1212 00:34:47.065380  296823 cri.go:89] found id: "d5c10c053a2306b36bed7e95ea22c18e8c71e4916805105400cca4e715b39675"
	I1212 00:34:47.065384  296823 cri.go:89] found id: "40f0aa5d7111ad953f0cf93f67a62cb204d4fc074605fd5ab577a94dc6d2d0a2"
	I1212 00:34:47.065387  296823 cri.go:89] found id: "46ced247bb6cb9df1846e17c8e1a5267a02c76daf3d5721b9ab21a684f9f59d7"
	I1212 00:34:47.065390  296823 cri.go:89] found id: "17479f6c2196c0e03f5729c15203a934ede701786b535f944214967080179be1"
	I1212 00:34:47.065394  296823 cri.go:89] found id: "a0ad080a093ddeaa725b3aa7bd29e92715f2fa158214966408422908cd7efbce"
	I1212 00:34:47.065397  296823 cri.go:89] found id: "d463fe18198b237fef4bf76765fe49362a2634f1272c155ffbf6c2967f301bf9"
	I1212 00:34:47.065399  296823 cri.go:89] found id: "be598b879266780ddd79387bcfed0dfa8ab737f26c38131f8d1479fbb3247bab"
	I1212 00:34:47.065402  296823 cri.go:89] found id: "e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af"
	I1212 00:34:47.065409  296823 cri.go:89] found id: "c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420"
	I1212 00:34:47.065413  296823 cri.go:89] found id: ""
	I1212 00:34:47.065448  296823 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:34:47.077032  296823 retry.go:31] will retry after 326.619106ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:47Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:34:47.404575  296823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:47.417230  296823 pause.go:52] kubelet running: false
	I1212 00:34:47.417291  296823 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:34:47.559226  296823 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:34:47.559317  296823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:34:47.627805  296823 cri.go:89] found id: "590f315a0e41429b9947025cf60b15230faac5f9cb474a9172b52736e8344a73"
	I1212 00:34:47.627827  296823 cri.go:89] found id: "d5c10c053a2306b36bed7e95ea22c18e8c71e4916805105400cca4e715b39675"
	I1212 00:34:47.627833  296823 cri.go:89] found id: "40f0aa5d7111ad953f0cf93f67a62cb204d4fc074605fd5ab577a94dc6d2d0a2"
	I1212 00:34:47.627837  296823 cri.go:89] found id: "46ced247bb6cb9df1846e17c8e1a5267a02c76daf3d5721b9ab21a684f9f59d7"
	I1212 00:34:47.627840  296823 cri.go:89] found id: "17479f6c2196c0e03f5729c15203a934ede701786b535f944214967080179be1"
	I1212 00:34:47.627844  296823 cri.go:89] found id: "a0ad080a093ddeaa725b3aa7bd29e92715f2fa158214966408422908cd7efbce"
	I1212 00:34:47.627847  296823 cri.go:89] found id: "d463fe18198b237fef4bf76765fe49362a2634f1272c155ffbf6c2967f301bf9"
	I1212 00:34:47.627849  296823 cri.go:89] found id: "be598b879266780ddd79387bcfed0dfa8ab737f26c38131f8d1479fbb3247bab"
	I1212 00:34:47.627852  296823 cri.go:89] found id: "e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af"
	I1212 00:34:47.627861  296823 cri.go:89] found id: "c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420"
	I1212 00:34:47.627864  296823 cri.go:89] found id: ""
	I1212 00:34:47.627899  296823 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:34:47.640003  296823 retry.go:31] will retry after 280.111302ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:47Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:34:47.920420  296823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:47.932428  296823 pause.go:52] kubelet running: false
	I1212 00:34:47.932511  296823 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:34:48.074033  296823 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:34:48.074110  296823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:34:48.138112  296823 cri.go:89] found id: "590f315a0e41429b9947025cf60b15230faac5f9cb474a9172b52736e8344a73"
	I1212 00:34:48.138134  296823 cri.go:89] found id: "d5c10c053a2306b36bed7e95ea22c18e8c71e4916805105400cca4e715b39675"
	I1212 00:34:48.138141  296823 cri.go:89] found id: "40f0aa5d7111ad953f0cf93f67a62cb204d4fc074605fd5ab577a94dc6d2d0a2"
	I1212 00:34:48.138145  296823 cri.go:89] found id: "46ced247bb6cb9df1846e17c8e1a5267a02c76daf3d5721b9ab21a684f9f59d7"
	I1212 00:34:48.138149  296823 cri.go:89] found id: "17479f6c2196c0e03f5729c15203a934ede701786b535f944214967080179be1"
	I1212 00:34:48.138154  296823 cri.go:89] found id: "a0ad080a093ddeaa725b3aa7bd29e92715f2fa158214966408422908cd7efbce"
	I1212 00:34:48.138157  296823 cri.go:89] found id: "d463fe18198b237fef4bf76765fe49362a2634f1272c155ffbf6c2967f301bf9"
	I1212 00:34:48.138162  296823 cri.go:89] found id: "be598b879266780ddd79387bcfed0dfa8ab737f26c38131f8d1479fbb3247bab"
	I1212 00:34:48.138166  296823 cri.go:89] found id: "e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af"
	I1212 00:34:48.138188  296823 cri.go:89] found id: "c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420"
	I1212 00:34:48.138196  296823 cri.go:89] found id: ""
	I1212 00:34:48.138243  296823 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:34:48.149643  296823 retry.go:31] will retry after 348.346358ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:48Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:34:48.498152  296823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:48.510657  296823 pause.go:52] kubelet running: false
	I1212 00:34:48.510704  296823 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:34:48.645009  296823 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:34:48.645084  296823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:34:48.707032  296823 cri.go:89] found id: "590f315a0e41429b9947025cf60b15230faac5f9cb474a9172b52736e8344a73"
	I1212 00:34:48.707051  296823 cri.go:89] found id: "d5c10c053a2306b36bed7e95ea22c18e8c71e4916805105400cca4e715b39675"
	I1212 00:34:48.707055  296823 cri.go:89] found id: "40f0aa5d7111ad953f0cf93f67a62cb204d4fc074605fd5ab577a94dc6d2d0a2"
	I1212 00:34:48.707059  296823 cri.go:89] found id: "46ced247bb6cb9df1846e17c8e1a5267a02c76daf3d5721b9ab21a684f9f59d7"
	I1212 00:34:48.707062  296823 cri.go:89] found id: "17479f6c2196c0e03f5729c15203a934ede701786b535f944214967080179be1"
	I1212 00:34:48.707065  296823 cri.go:89] found id: "a0ad080a093ddeaa725b3aa7bd29e92715f2fa158214966408422908cd7efbce"
	I1212 00:34:48.707068  296823 cri.go:89] found id: "d463fe18198b237fef4bf76765fe49362a2634f1272c155ffbf6c2967f301bf9"
	I1212 00:34:48.707070  296823 cri.go:89] found id: "be598b879266780ddd79387bcfed0dfa8ab737f26c38131f8d1479fbb3247bab"
	I1212 00:34:48.707073  296823 cri.go:89] found id: "e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af"
	I1212 00:34:48.707083  296823 cri.go:89] found id: "c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420"
	I1212 00:34:48.707086  296823 cri.go:89] found id: ""
	I1212 00:34:48.707121  296823 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:34:48.720906  296823 out.go:203] 
	W1212 00:34:48.722054  296823 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:34:48.722068  296823 out.go:285] * 
	* 
	W1212 00:34:48.725845  296823 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:34:48.726930  296823 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-743506 --alsologtostderr -v=1 failed: exit status 80
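
The pause failure above boils down to `sudo runc list -f json` returning `open /run/runc: no such file or directory`; the stderr shows the same listing being retried a few times with a randomized delay (e.g. "will retry after 326.619106ms") before the command exits with GUEST_PAUSE. A minimal sketch of that retry shape is below; the helper name, attempt count, and delay range are assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryRuncList mirrors the behaviour visible in the stderr above: each
// attempt shells out to "runc list", and on failure waits a randomized delay
// before trying again, eventually surfacing the last error.
func retryRuncList(attempts int) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("list running: runc: %w\noutput: %s", err, out)
		delay := time.Duration(200+rand.Intn(200)) * time.Millisecond // e.g. "will retry after 326ms"
		fmt.Printf("will retry after %v: %v\n", delay, lastErr)
		time.Sleep(delay)
	}
	return nil, lastErr
}

func main() {
	if _, err := retryRuncList(4); err != nil {
		fmt.Println("Exiting due to GUEST_PAUSE:", err)
	}
}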
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-743506
helpers_test.go:244: (dbg) docker inspect old-k8s-version-743506:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671",
	        "Created": "2025-12-12T00:32:56.81457716Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287954,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:34:07.678379993Z",
	            "FinishedAt": "2025-12-12T00:34:06.851652039Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/hosts",
	        "LogPath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671-json.log",
	        "Name": "/old-k8s-version-743506",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-743506:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-743506",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671",
	                "LowerDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-743506",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-743506/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-743506",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-743506",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-743506",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a8a5e281f0693fd937fc65881ff362ae373436321f2d0f37d356440a11775c45",
	            "SandboxKey": "/var/run/docker/netns/a8a5e281f069",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-743506": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cdcdaa73e4279f08edc884bf6d2244b7a4df03294612c4ea8561dd87e0d0ec16",
	                    "EndpointID": "20afcfc40aa2874517ac090d98642d677f57e757ff0662907dcd84f48a8823bb",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "d2:29:49:e6:27:6b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-743506",
	                        "e6e7fe2ace92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
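
The NetworkSettings.Ports block in the inspect output above is the same data the earlier cli_runner template query reads to locate the SSH endpoint (22/tcp published on 127.0.0.1:33073, matching the "new ssh client" line in the stderr). A small sketch of extracting it from `docker container inspect` JSON, assuming the container name from this report:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields needed to read published ports back out of `docker container inspect`.
type containerInspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "old-k8s-version-743506").Output()
	if err != nil {
		panic(err)
	}
	var containers []containerInspect
	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
		panic(fmt.Sprintf("unexpected inspect output: %v", err))
	}
	for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
		// With the bindings shown above this prints: ssh endpoint 127.0.0.1:33073
		fmt.Printf("ssh endpoint %s:%s\n", b.HostIp, b.HostPort)
	}
}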
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-743506 -n old-k8s-version-743506
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-743506 -n old-k8s-version-743506: exit status 2 (307.206427ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-743506 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-743506 logs -n 25: (1.002971114s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ delete  │ -p stopped-upgrade-148693                                                                                                                                                                                                                     │ stopped-upgrade-148693 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p cert-options-319518 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-319518    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p cert-expiration-673665 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-673665 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ delete  │ -p running-upgrade-299658                                                                                                                                                                                                                     │ running-upgrade-299658 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-expiration-673665                                                                                                                                                                                                                     │ cert-expiration-673665 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ cert-options-319518 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319518    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ -p cert-options-319518 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319518    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-options-319518                                                                                                                                                                                                                        │ cert-options-319518    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p old-k8s-version-743506 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p no-preload-675290 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-858659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ stop    │ -p embed-certs-858659 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-743506 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p no-preload-675290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-858659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ image   │ old-k8s-version-743506 image list --format=json                                                                                                                                                                                               │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p old-k8s-version-743506 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
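	The command-history table above and the "==> Last Start <==" log below belong to a single minikube log dump for this run. As a rough sketch only (the profile name is taken from the last rows of the table and the binary path from the rest of this report), a comparable dump can be collected on the build host with:
	
		out/minikube-linux-amd64 -p old-k8s-version-743506 logs
	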
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:34:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:34:22.194991  292217 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:34:22.195276  292217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:22.195286  292217 out.go:374] Setting ErrFile to fd 2...
	I1212 00:34:22.195290  292217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:22.195461  292217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:34:22.195951  292217 out.go:368] Setting JSON to false
	I1212 00:34:22.197248  292217 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4608,"bootTime":1765495054,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:34:22.197307  292217 start.go:143] virtualization: kvm guest
	I1212 00:34:22.199590  292217 out.go:179] * [embed-certs-858659] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:34:22.200695  292217 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:34:22.200770  292217 notify.go:221] Checking for updates...
	I1212 00:34:22.202938  292217 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:34:22.205663  292217 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:34:22.207024  292217 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:34:22.208159  292217 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:34:22.209335  292217 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:34:22.211048  292217 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:34:22.211868  292217 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:34:22.238149  292217 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:34:22.238284  292217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:22.300345  292217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:34:22.28954278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:34:22.300452  292217 docker.go:319] overlay module found
	I1212 00:34:22.302189  292217 out.go:179] * Using the docker driver based on existing profile
	I1212 00:34:22.303236  292217 start.go:309] selected driver: docker
	I1212 00:34:22.303252  292217 start.go:927] validating driver "docker" against &{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:34:22.303357  292217 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:34:22.304085  292217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:22.364908  292217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:34:22.355220994 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:34:22.365186  292217 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:22.365208  292217 cni.go:84] Creating CNI manager for ""
	I1212 00:34:22.365254  292217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:34:22.365282  292217 start.go:353] cluster config:
	{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:34:22.367044  292217 out.go:179] * Starting "embed-certs-858659" primary control-plane node in "embed-certs-858659" cluster
	I1212 00:34:22.368095  292217 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:34:22.369191  292217 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:34:22.370219  292217 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:34:22.370254  292217 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:34:22.370268  292217 cache.go:65] Caching tarball of preloaded images
	I1212 00:34:22.370312  292217 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:34:22.370362  292217 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:34:22.370377  292217 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:34:22.370514  292217 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:34:22.391729  292217 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:34:22.391750  292217 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:34:22.391769  292217 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:34:22.391800  292217 start.go:360] acquireMachinesLock for embed-certs-858659: {Name:mk65733daa8eb01c9a3ad2d27b0888c2a1a8b319 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:34:22.391881  292217 start.go:364] duration metric: took 47.626µs to acquireMachinesLock for "embed-certs-858659"
	I1212 00:34:22.391906  292217 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:34:22.391912  292217 fix.go:54] fixHost starting: 
	I1212 00:34:22.392190  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:22.410761  292217 fix.go:112] recreateIfNeeded on embed-certs-858659: state=Stopped err=<nil>
	W1212 00:34:22.410787  292217 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:34:17.930547  287750 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:34:17.934787  287750 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1212 00:34:17.935919  287750 api_server.go:141] control plane version: v1.28.0
	I1212 00:34:17.935939  287750 api_server.go:131] duration metric: took 506.401624ms to wait for apiserver health ...
	I1212 00:34:17.935961  287750 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:34:17.939387  287750 system_pods.go:59] 8 kube-system pods found
	I1212 00:34:17.939432  287750 system_pods.go:61] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:17.939448  287750 system_pods.go:61] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:17.939461  287750 system_pods.go:61] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:34:17.939470  287750 system_pods.go:61] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:17.939494  287750 system_pods.go:61] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:17.939506  287750 system_pods.go:61] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:34:17.939514  287750 system_pods.go:61] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:17.939523  287750 system_pods.go:61] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:34:17.939531  287750 system_pods.go:74] duration metric: took 3.56333ms to wait for pod list to return data ...
	I1212 00:34:17.939542  287750 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:34:17.941406  287750 default_sa.go:45] found service account: "default"
	I1212 00:34:17.941423  287750 default_sa.go:55] duration metric: took 1.872906ms for default service account to be created ...
	I1212 00:34:17.941431  287750 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:34:17.944007  287750 system_pods.go:86] 8 kube-system pods found
	I1212 00:34:17.944034  287750 system_pods.go:89] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:17.944044  287750 system_pods.go:89] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:17.944052  287750 system_pods.go:89] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:34:17.944060  287750 system_pods.go:89] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:17.944069  287750 system_pods.go:89] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:17.944079  287750 system_pods.go:89] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:34:17.944088  287750 system_pods.go:89] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:17.944094  287750 system_pods.go:89] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:34:17.944105  287750 system_pods.go:126] duration metric: took 2.668947ms to wait for k8s-apps to be running ...
	I1212 00:34:17.944116  287750 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:34:17.944183  287750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:17.956567  287750 system_svc.go:56] duration metric: took 12.447031ms WaitForService to wait for kubelet
	I1212 00:34:17.956589  287750 kubeadm.go:587] duration metric: took 3.757690609s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:17.956609  287750 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:34:17.958554  287750 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:34:17.958575  287750 node_conditions.go:123] node cpu capacity is 8
	I1212 00:34:17.958593  287750 node_conditions.go:105] duration metric: took 1.974354ms to run NodePressure ...
	I1212 00:34:17.958607  287750 start.go:242] waiting for startup goroutines ...
	I1212 00:34:17.958622  287750 start.go:247] waiting for cluster config update ...
	I1212 00:34:17.958640  287750 start.go:256] writing updated cluster config ...
	I1212 00:34:17.958881  287750 ssh_runner.go:195] Run: rm -f paused
	I1212 00:34:17.962438  287750 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:17.965710  287750 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 00:34:19.971812  287750 pod_ready.go:104] pod "coredns-5dd5756b68-nxwdc" is not "Ready", error: node "old-k8s-version-743506" hosting pod "coredns-5dd5756b68-nxwdc" is not "Ready" (will retry)
	W1212 00:34:21.972884  287750 pod_ready.go:104] pod "coredns-5dd5756b68-nxwdc" is not "Ready", error: node "old-k8s-version-743506" hosting pod "coredns-5dd5756b68-nxwdc" is not "Ready" (will retry)
	I1212 00:34:21.825329  290093 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:34:21.826261  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 00:34:21.826281  290093 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 00:34:21.826370  290093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:34:21.849641  290093 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:21.849646  290093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:34:21.849664  290093 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:34:21.849826  290093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:34:21.851877  290093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:34:21.890953  290093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:34:21.983765  290093 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:34:21.984639  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 00:34:21.984658  290093 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 00:34:21.992024  290093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:34:22.003673  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 00:34:22.003694  290093 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 00:34:22.005666  290093 node_ready.go:35] waiting up to 6m0s for node "no-preload-675290" to be "Ready" ...
	I1212 00:34:22.016144  290093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:22.022417  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 00:34:22.022439  290093 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 00:34:22.043609  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 00:34:22.043666  290093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 00:34:22.064203  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 00:34:22.064230  290093 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 00:34:22.081605  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 00:34:22.081626  290093 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 00:34:22.098674  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 00:34:22.098715  290093 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 00:34:22.115963  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 00:34:22.115987  290093 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 00:34:22.132839  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:34:22.132864  290093 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 00:34:22.148773  290093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:34:22.954440  290093 node_ready.go:49] node "no-preload-675290" is "Ready"
	I1212 00:34:22.954493  290093 node_ready.go:38] duration metric: took 948.786308ms for node "no-preload-675290" to be "Ready" ...
	I1212 00:34:22.954513  290093 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:34:22.954568  290093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:34:23.460200  290093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.468149091s)
	I1212 00:34:23.460313  290093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.444116968s)
	I1212 00:34:23.460400  290093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.311593953s)
	I1212 00:34:23.460429  290093 api_server.go:72] duration metric: took 1.662216059s to wait for apiserver process to appear ...
	I1212 00:34:23.460441  290093 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:34:23.460498  290093 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:34:23.461985  290093 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-675290 addons enable metrics-server
	
	I1212 00:34:23.465588  290093 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:23.465612  290093 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:34:23.466996  290093 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1212 00:34:23.468160  290093 addons.go:530] duration metric: took 1.669912774s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 00:34:23.961381  290093 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:34:23.967063  290093 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:23.967088  290093 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:34:24.460610  290093 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:34:24.465379  290093 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
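	The verbose healthz responses above show the usual startup sequence: 500s while the rbac/bootstrap-roles (and initially scheduling/bootstrap-system-priority-classes) post-start hooks finish, then a 200. As a minimal sketch, the same probe can be reproduced by hand against this cluster, assuming the profile's kubeconfig context is selected:
	
		kubectl --context no-preload-675290 get --raw '/healthz?verbose'
	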
	I1212 00:34:24.466274  290093 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 00:34:24.466296  290093 api_server.go:131] duration metric: took 1.005848918s to wait for apiserver health ...
	I1212 00:34:24.466304  290093 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:34:24.469960  290093 system_pods.go:59] 8 kube-system pods found
	I1212 00:34:24.470002  290093 system_pods.go:61] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:24.470020  290093 system_pods.go:61] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:24.470030  290093 system_pods.go:61] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:34:24.470042  290093 system_pods.go:61] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:24.470052  290093 system_pods.go:61] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:24.470065  290093 system_pods.go:61] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:34:24.470075  290093 system_pods.go:61] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:24.470083  290093 system_pods.go:61] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Running
	I1212 00:34:24.470095  290093 system_pods.go:74] duration metric: took 3.783504ms to wait for pod list to return data ...
	I1212 00:34:24.470107  290093 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:34:24.472424  290093 default_sa.go:45] found service account: "default"
	I1212 00:34:24.472446  290093 default_sa.go:55] duration metric: took 2.32759ms for default service account to be created ...
	I1212 00:34:24.472455  290093 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:34:24.474765  290093 system_pods.go:86] 8 kube-system pods found
	I1212 00:34:24.474789  290093 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:24.474797  290093 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:24.474802  290093 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:34:24.474807  290093 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:24.474812  290093 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:24.474819  290093 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:34:24.474824  290093 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:24.474828  290093 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Running
	I1212 00:34:24.474837  290093 system_pods.go:126] duration metric: took 2.375958ms to wait for k8s-apps to be running ...
	I1212 00:34:24.474842  290093 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:34:24.474880  290093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:24.487107  290093 system_svc.go:56] duration metric: took 12.259625ms WaitForService to wait for kubelet
	I1212 00:34:24.487123  290093 kubeadm.go:587] duration metric: took 2.688911743s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:24.487151  290093 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:34:24.489298  290093 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:34:24.489318  290093 node_conditions.go:123] node cpu capacity is 8
	I1212 00:34:24.489333  290093 node_conditions.go:105] duration metric: took 2.175894ms to run NodePressure ...
	I1212 00:34:24.489343  290093 start.go:242] waiting for startup goroutines ...
	I1212 00:34:24.489352  290093 start.go:247] waiting for cluster config update ...
	I1212 00:34:24.489362  290093 start.go:256] writing updated cluster config ...
	I1212 00:34:24.489611  290093 ssh_runner.go:195] Run: rm -f paused
	I1212 00:34:24.493607  290093 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:24.496951  290093 pod_ready.go:83] waiting for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:22.412332  292217 out.go:252] * Restarting existing docker container for "embed-certs-858659" ...
	I1212 00:34:22.412393  292217 cli_runner.go:164] Run: docker start embed-certs-858659
	I1212 00:34:22.675395  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:22.697554  292217 kic.go:430] container "embed-certs-858659" state is running.
	I1212 00:34:22.698003  292217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:34:22.721205  292217 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:34:22.721434  292217 machine.go:94] provisionDockerMachine start ...
	I1212 00:34:22.721530  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:22.740223  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:22.740531  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:22.740552  292217 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:34:22.741123  292217 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48100->127.0.0.1:33083: read: connection reset by peer
	I1212 00:34:25.873905  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:34:25.873938  292217 ubuntu.go:182] provisioning hostname "embed-certs-858659"
	I1212 00:34:25.874010  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:25.891640  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:25.891843  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:25.891854  292217 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-858659 && echo "embed-certs-858659" | sudo tee /etc/hostname
	I1212 00:34:26.033680  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:34:26.033749  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.054661  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:26.054969  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:26.055001  292217 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-858659' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-858659/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-858659' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:34:26.193045  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:34:26.193085  292217 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:34:26.193135  292217 ubuntu.go:190] setting up certificates
	I1212 00:34:26.193149  292217 provision.go:84] configureAuth start
	I1212 00:34:26.193222  292217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:34:26.210729  292217 provision.go:143] copyHostCerts
	I1212 00:34:26.210790  292217 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:34:26.210805  292217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:34:26.210864  292217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:34:26.211018  292217 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:34:26.211030  292217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:34:26.211064  292217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:34:26.211138  292217 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:34:26.211145  292217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:34:26.211176  292217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:34:26.211239  292217 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.embed-certs-858659 san=[127.0.0.1 192.168.94.2 embed-certs-858659 localhost minikube]
	I1212 00:34:26.334330  292217 provision.go:177] copyRemoteCerts
	I1212 00:34:26.334387  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:34:26.334432  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.352293  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:26.448550  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:34:26.465534  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:34:26.482790  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:34:26.500628  292217 provision.go:87] duration metric: took 307.45892ms to configureAuth
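The scp lines above push the CA plus the freshly generated server certificate and key into /etc/docker on the node; the server cert was minted a few lines earlier with SANs for 127.0.0.1, 192.168.94.2, embed-certs-858659, localhost and minikube. A quick way to confirm those SANs on the host copy (a sketch only, assuming openssl is available and using the path from this run):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expected output roughly:
	#   X509v3 Subject Alternative Name:
	#       DNS:embed-certs-858659, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.94.2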
	I1212 00:34:26.500654  292217 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:34:26.500854  292217 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:34:26.500972  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.518572  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:26.518811  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:26.518834  292217 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:34:26.850738  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:34:26.850803  292217 machine.go:97] duration metric: took 4.12935252s to provisionDockerMachine
	I1212 00:34:26.850819  292217 start.go:293] postStartSetup for "embed-certs-858659" (driver="docker")
	I1212 00:34:26.850842  292217 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:34:26.850914  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:34:26.850984  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.871065  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:26.966453  292217 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:34:26.970137  292217 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:34:26.970162  292217 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:34:26.970172  292217 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:34:26.970227  292217 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:34:26.970325  292217 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:34:26.970442  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:34:26.978705  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:34:26.995716  292217 start.go:296] duration metric: took 144.870061ms for postStartSetup
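The 145032.pem copy above is minikube's file-sync mechanism at work: anything placed under the profile's .minikube/files directory on the host is mirrored onto the node at the same path during postStartSetup. A minimal sketch of using that for an asset of your own (the corp-ca.pem name is purely illustrative):

	# on the host; this run's MINIKUBE_HOME is /home/jenkins/minikube-integration/22101-10975
	mkdir -p "$MINIKUBE_HOME/.minikube/files/etc/ssl/certs"
	cp corp-ca.pem "$MINIKUBE_HOME/.minikube/files/etc/ssl/certs/corp-ca.pem"
	# after the next start the file appears inside the node at /etc/ssl/certs/corp-ca.pem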
	I1212 00:34:26.995782  292217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:34:26.995835  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:27.014285  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:27.105922  292217 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:34:27.110345  292217 fix.go:56] duration metric: took 4.718428372s for fixHost
	I1212 00:34:27.110371  292217 start.go:83] releasing machines lock for "embed-certs-858659", held for 4.718475367s
	I1212 00:34:27.110437  292217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:34:27.127373  292217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:34:27.127396  292217 ssh_runner.go:195] Run: cat /version.json
	I1212 00:34:27.127437  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:27.127445  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:27.144516  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:27.145862  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	W1212 00:34:24.471877  287750 pod_ready.go:104] pod "coredns-5dd5756b68-nxwdc" is not "Ready", error: node "old-k8s-version-743506" hosting pod "coredns-5dd5756b68-nxwdc" is not "Ready" (will retry)
	I1212 00:34:26.971400  287750 pod_ready.go:94] pod "coredns-5dd5756b68-nxwdc" is "Ready"
	I1212 00:34:26.971427  287750 pod_ready.go:86] duration metric: took 9.005696281s for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:26.974242  287750 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:27.236084  292217 ssh_runner.go:195] Run: systemctl --version
	I1212 00:34:27.289011  292217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:34:27.320985  292217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:34:27.325714  292217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:34:27.325777  292217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:34:27.334535  292217 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:34:27.334554  292217 start.go:496] detecting cgroup driver to use...
	I1212 00:34:27.334579  292217 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:34:27.334633  292217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:34:27.348435  292217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:34:27.359652  292217 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:34:27.359703  292217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:34:27.374109  292217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:34:27.386469  292217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:34:27.460054  292217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:34:27.535041  292217 docker.go:234] disabling docker service ...
	I1212 00:34:27.535088  292217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:34:27.548165  292217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:34:27.559573  292217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:34:27.632790  292217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:34:27.713620  292217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:34:27.725354  292217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:34:27.738210  292217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:34:27.738258  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.746385  292217 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:34:27.746427  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.754504  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.762234  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.770331  292217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:34:27.777754  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.785740  292217 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.793156  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.800928  292217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:34:27.807604  292217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:34:27.814250  292217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:34:27.892059  292217 ssh_runner.go:195] Run: sudo systemctl restart crio
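All of the sed and grep commands above edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, before CRI-O is restarted. Reconstructed from those commands, the keys they touch should end up roughly as follows (a sketch; the real file carries other settings as well):

	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]
	# one way to confirm the live values on the node:
	sudo crio config | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'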
	I1212 00:34:28.023411  292217 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:34:28.023508  292217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:34:28.027324  292217 start.go:564] Will wait 60s for crictl version
	I1212 00:34:28.027377  292217 ssh_runner.go:195] Run: which crictl
	I1212 00:34:28.030733  292217 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:34:28.053419  292217 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:34:28.053521  292217 ssh_runner.go:195] Run: crio --version
	I1212 00:34:28.078218  292217 ssh_runner.go:195] Run: crio --version
	I1212 00:34:28.104986  292217 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:34:28.106118  292217 cli_runner.go:164] Run: docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:34:28.122845  292217 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 00:34:28.127252  292217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:34:28.136929  292217 kubeadm.go:884] updating cluster {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:34:28.137027  292217 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:34:28.137068  292217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:34:28.166606  292217 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:34:28.166625  292217 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:34:28.166660  292217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:34:28.189997  292217 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:34:28.190014  292217 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:34:28.190022  292217 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1212 00:34:28.190122  292217 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-858659 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
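The kubelet unit rendered above is installed as a systemd drop-in (the 10-kubeadm.conf scp a few lines below); the empty ExecStart= line first clears any distro default before the minikube-specific command line is set. To inspect the merged result on the node one could run, for example via minikube ssh:

	sudo systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service followed by
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, i.e. the ExecStart shown above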
	I1212 00:34:28.190197  292217 ssh_runner.go:195] Run: crio config
	I1212 00:34:28.233455  292217 cni.go:84] Creating CNI manager for ""
	I1212 00:34:28.233501  292217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:34:28.233520  292217 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:34:28.233549  292217 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-858659 NodeName:embed-certs-858659 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:34:28.233667  292217 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-858659"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:34:28.233728  292217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:34:28.241238  292217 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:34:28.241286  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:34:28.248529  292217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1212 00:34:28.259931  292217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:34:28.272064  292217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
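The YAML block above is the kubeadm configuration minikube renders and has just copied to /var/tmp/minikube/kubeadm.yaml.new. As a sanity check it can be fed back to kubeadm's own validator; a hedged sketch, assuming the bundled kubeadm supports the `config validate` subcommand (present in recent releases):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new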
	I1212 00:34:28.283961  292217 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:34:28.287295  292217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:34:28.296174  292217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:34:28.374313  292217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:34:28.399445  292217 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659 for IP: 192.168.94.2
	I1212 00:34:28.399469  292217 certs.go:195] generating shared ca certs ...
	I1212 00:34:28.399502  292217 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:28.399682  292217 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:34:28.399740  292217 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:34:28.399754  292217 certs.go:257] generating profile certs ...
	I1212 00:34:28.399858  292217 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key
	I1212 00:34:28.399921  292217 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc
	I1212 00:34:28.399969  292217 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key
	I1212 00:34:28.400101  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:34:28.400154  292217 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:34:28.400167  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:34:28.400199  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:34:28.400232  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:34:28.400265  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:34:28.400324  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:34:28.401140  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:34:28.418445  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:34:28.435360  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:34:28.454053  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:34:28.476744  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 00:34:28.494464  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:34:28.512911  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:34:28.532124  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:34:28.550167  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:34:28.569797  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:34:28.588171  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:34:28.604980  292217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:34:28.616294  292217 ssh_runner.go:195] Run: openssl version
	I1212 00:34:28.621964  292217 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.628771  292217 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:34:28.635451  292217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.638885  292217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.638945  292217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.673421  292217 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:34:28.680066  292217 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.686678  292217 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:34:28.693326  292217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.696643  292217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.696676  292217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.730803  292217 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:34:28.737734  292217 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.744631  292217 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:34:28.751977  292217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.755400  292217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.755447  292217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.789571  292217 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
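The `test -L` probes above (3ec20f2e.0, b5213941.0, 51391683.0) verify the OpenSSL-style hash symlinks that make each CA discoverable under /etc/ssl/certs. The link name is simply the certificate's subject hash, so the pairing can be reproduced by hand on the node (sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"   # expected to point back at minikubeCA.pem (here h is b5213941)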
	I1212 00:34:28.796415  292217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:34:28.799981  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:34:28.833241  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:34:28.867120  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:34:28.906082  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:34:28.954312  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:34:28.999466  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
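Each `-checkend 86400` run above asks openssl whether the certificate will still be valid for the next 86400 seconds (24 hours): exit 0 keeps the cert as-is, while a non-zero exit would flag it for regeneration. A standalone illustration of the same check:

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	  echo "apiserver.crt is valid for at least another 24h"
	else
	  echo "apiserver.crt expires within 24h (or could not be read)"
	fi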
	I1212 00:34:29.061379  292217 kubeadm.go:401] StartCluster: {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:34:29.061506  292217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:34:29.061568  292217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:34:29.094454  292217 cri.go:89] found id: "6eb73517ebee81333693231ad0296e35a758eeb5aa9cc45c7ee7663ea6e652e4"
	I1212 00:34:29.094504  292217 cri.go:89] found id: "07f4a35d8d4d1bc19d0c0dc6b015f381bc482dc379a9a416e57528498aa8e1e5"
	I1212 00:34:29.094512  292217 cri.go:89] found id: "a4dfa21dd3b082f3a9464428da14520464aa97d1551c9a8ddffbf56d063877a6"
	I1212 00:34:29.094516  292217 cri.go:89] found id: "3da8c9c634a0528028b8a0875544b6a0c41e72dc8bf1ff1da95beccf80094376"
	I1212 00:34:29.094520  292217 cri.go:89] found id: ""
	I1212 00:34:29.094580  292217 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 00:34:29.106285  292217 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:29Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:34:29.106353  292217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:34:29.113726  292217 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:34:29.113742  292217 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:34:29.113783  292217 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:34:29.120655  292217 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:34:29.121366  292217 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-858659" does not appear in /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:34:29.121810  292217 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-10975/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-858659" cluster setting kubeconfig missing "embed-certs-858659" context setting]
	I1212 00:34:29.122410  292217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:29.123956  292217 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:34:29.131575  292217 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1212 00:34:29.131601  292217 kubeadm.go:602] duration metric: took 17.853493ms to restartPrimaryControlPlane
	I1212 00:34:29.131610  292217 kubeadm.go:403] duration metric: took 70.240665ms to StartCluster
	I1212 00:34:29.131624  292217 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:29.131695  292217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:34:29.133806  292217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
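The two WriteFile lines above are the kubeconfig repair noted just before them: the "embed-certs-858659" cluster and context were missing from the Jenkins kubeconfig and have now been written back. A quick confirmation from the host (sketch):

	kubectl config get-contexts \
	  --kubeconfig /home/jenkins/minikube-integration/22101-10975/kubeconfig
	# an embed-certs-858659 context should now be listed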
	I1212 00:34:29.134050  292217 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:34:29.134111  292217 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:34:29.134220  292217 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-858659"
	I1212 00:34:29.134242  292217 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-858659"
	W1212 00:34:29.134251  292217 addons.go:248] addon storage-provisioner should already be in state true
	I1212 00:34:29.134244  292217 addons.go:70] Setting dashboard=true in profile "embed-certs-858659"
	I1212 00:34:29.134268  292217 addons.go:239] Setting addon dashboard=true in "embed-certs-858659"
	I1212 00:34:29.134259  292217 addons.go:70] Setting default-storageclass=true in profile "embed-certs-858659"
	W1212 00:34:29.134278  292217 addons.go:248] addon dashboard should already be in state true
	I1212 00:34:29.134290  292217 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:34:29.134294  292217 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:34:29.134312  292217 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:34:29.134291  292217 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-858659"
	I1212 00:34:29.134698  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.134803  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.134819  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.135940  292217 out.go:179] * Verifying Kubernetes components...
	I1212 00:34:29.137294  292217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:34:29.160012  292217 addons.go:239] Setting addon default-storageclass=true in "embed-certs-858659"
	W1212 00:34:29.160036  292217 addons.go:248] addon default-storageclass should already be in state true
	I1212 00:34:29.160062  292217 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:34:29.160531  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.161454  292217 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:29.162569  292217 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 00:34:29.162613  292217 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:34:29.162631  292217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:34:29.162683  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:29.164828  292217 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1212 00:34:26.502049  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: node "no-preload-675290" hosting pod "coredns-7d764666f9-44t4m" is not "Ready" (will retry)
	W1212 00:34:28.503136  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: node "no-preload-675290" hosting pod "coredns-7d764666f9-44t4m" is not "Ready" (will retry)
	I1212 00:34:29.166119  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 00:34:29.166135  292217 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 00:34:29.166188  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:29.192491  292217 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:29.192516  292217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:34:29.192574  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:29.196179  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:29.198591  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:29.216901  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:29.287840  292217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:34:29.302952  292217 node_ready.go:35] waiting up to 6m0s for node "embed-certs-858659" to be "Ready" ...
	I1212 00:34:29.317900  292217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:34:29.321896  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 00:34:29.321912  292217 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 00:34:29.343648  292217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:29.344330  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 00:34:29.344372  292217 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 00:34:29.363923  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 00:34:29.363955  292217 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 00:34:29.381875  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 00:34:29.381897  292217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 00:34:29.396784  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 00:34:29.396803  292217 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 00:34:29.410654  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 00:34:29.410676  292217 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 00:34:29.425501  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 00:34:29.425524  292217 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 00:34:29.440231  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 00:34:29.440252  292217 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 00:34:29.452746  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:34:29.452766  292217 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 00:34:29.466329  292217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:34:30.818434  292217 node_ready.go:49] node "embed-certs-858659" is "Ready"
	I1212 00:34:30.818487  292217 node_ready.go:38] duration metric: took 1.515392528s for node "embed-certs-858659" to be "Ready" ...
	I1212 00:34:30.818508  292217 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:34:30.818565  292217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:34:31.468911  292217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.150974747s)
	I1212 00:34:31.468978  292217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.125284091s)
	I1212 00:34:31.469122  292217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.0027624s)
	I1212 00:34:31.469455  292217 api_server.go:72] duration metric: took 2.335374756s to wait for apiserver process to appear ...
	I1212 00:34:31.469505  292217 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:34:31.469524  292217 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:34:31.473590  292217 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-858659 addons enable metrics-server
	
	I1212 00:34:31.476459  292217 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:31.476505  292217 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
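The 500s above come straight from the apiserver's /healthz endpoint while the rbac/bootstrap-roles post-start hook is still settling after the restart; minikube simply keeps polling until the status flips to 200. The same verbose check can be reproduced by hand (a sketch; /healthz is readable anonymously via the default system:public-info-viewer binding):

	curl -k "https://192.168.94.2:8443/healthz?verbose"
	# or wait on everything except the known-slow hook:
	curl -k "https://192.168.94.2:8443/healthz?exclude=poststarthook/rbac/bootstrap-roles"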
	I1212 00:34:31.483691  292217 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1212 00:34:30.508733  263844 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062075524s)
	W1212 00:34:30.508779  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1212 00:34:30.508790  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:30.508810  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:30.546082  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:34:30.546123  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:34:30.579092  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:30.579118  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:30.604859  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:30.604882  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:30.657742  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:30.657769  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:30.671365  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:30.671388  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:30.705424  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:30.705450  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:31.484807  292217 addons.go:530] duration metric: took 2.350700971s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 00:34:31.969645  292217 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:34:31.975159  292217 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:31.975202  292217 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:28.982220  287750 pod_ready.go:104] pod "etcd-old-k8s-version-743506" is not "Ready", error: <nil>
	W1212 00:34:31.481681  287750 pod_ready.go:104] pod "etcd-old-k8s-version-743506" is not "Ready", error: <nil>
	I1212 00:34:31.981277  287750 pod_ready.go:94] pod "etcd-old-k8s-version-743506" is "Ready"
	I1212 00:34:31.981308  287750 pod_ready.go:86] duration metric: took 5.007040467s for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:31.985958  287750 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:31.993506  287750 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-743506" is "Ready"
	I1212 00:34:31.993527  287750 pod_ready.go:86] duration metric: took 7.548054ms for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:31.998043  287750 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.003355  287750 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-743506" is "Ready"
	I1212 00:34:32.003413  287750 pod_ready.go:86] duration metric: took 5.344333ms for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.006576  287750 pod_ready.go:83] waiting for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.178403  287750 pod_ready.go:94] pod "kube-proxy-pz8kt" is "Ready"
	I1212 00:34:32.178429  287750 pod_ready.go:86] duration metric: took 171.831568ms for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.379015  287750 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.778296  287750 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-743506" is "Ready"
	I1212 00:34:32.778333  287750 pod_ready.go:86] duration metric: took 399.28376ms for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.778358  287750 pod_ready.go:40] duration metric: took 14.8158908s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:32.833106  287750 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1212 00:34:32.835985  287750 out.go:203] 
	W1212 00:34:32.837089  287750 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1212 00:34:32.838320  287750 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1212 00:34:32.839516  287750 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-743506" cluster and "default" namespace by default
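	[editor's note] The pod_ready lines above show the final phase of startup for this profile: after warning about the kubectl/cluster version skew (1.34.3 vs 1.28.0), minikube waits for each kube-system control-plane pod to report Ready before printing "Done!". The client-go sketch below shows that kind of readiness wait in minimal form; it is not minikube's pod_ready.go, and the kubeconfig path is a placeholder while the pod name is just an example taken from the log.

	```go
	// waitready: an illustrative sketch of a "wait until pod is Ready" loop like the
	// pod_ready waits above. Not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		const name = "etcd-old-k8s-version-743506" // example pod name from the log
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Printf("pod %q is Ready\n", name)
				return
			}
			select {
			case <-ctx.Done():
				fmt.Printf("gave up waiting for %q: %v\n", name, ctx.Err())
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
	```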
	W1212 00:34:31.007103  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: node "no-preload-675290" hosting pod "coredns-7d764666f9-44t4m" is not "Ready" (will retry)
	W1212 00:34:33.503221  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: <nil>
	I1212 00:34:33.249325  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:34.659371  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:57038->192.168.85.2:8443: read: connection reset by peer
	I1212 00:34:34.659452  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:34.659598  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:34.720459  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:34.720562  263844 cri.go:89] found id: "e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:34:34.720572  263844 cri.go:89] found id: ""
	I1212 00:34:34.720583  263844 logs.go:282] 2 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106]
	I1212 00:34:34.720649  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.727624  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.732978  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:34.733038  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:34.771884  263844 cri.go:89] found id: ""
	I1212 00:34:34.771911  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.771923  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:34.771930  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:34.771985  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:34.813253  263844 cri.go:89] found id: ""
	I1212 00:34:34.813292  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.813304  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:34.813313  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:34.813375  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:34.854049  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:34.854075  263844 cri.go:89] found id: ""
	I1212 00:34:34.854084  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:34.854152  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.860190  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:34.860258  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:34.898840  263844 cri.go:89] found id: ""
	I1212 00:34:34.898872  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.898883  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:34.898891  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:34.898952  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:34.937834  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:34.937859  263844 cri.go:89] found id: ""
	I1212 00:34:34.937869  263844 logs.go:282] 1 containers: [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:34.937925  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.944202  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:34.944414  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:34.982162  263844 cri.go:89] found id: ""
	I1212 00:34:34.982222  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.982233  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:34.982250  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:34.982348  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:35.028882  263844 cri.go:89] found id: ""
	I1212 00:34:35.028907  263844 logs.go:282] 0 containers: []
	W1212 00:34:35.028919  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:35.028935  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:35.028955  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:35.123296  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:35.123511  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:34:35.123684  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:34:35.174086  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:35.174189  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:35.261332  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:35.261372  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:35.311106  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:35.311138  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:35.426171  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:35.426206  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:35.471138  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:35.471171  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:35.510352  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:35.510384  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:35.548527  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:35.548558  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
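	[editor's note] Because the apiserver at 192.168.85.2:8443 keeps refusing connections, this profile (263844) repeatedly falls back to gathering diagnostics: it lists containers per component with "crictl ps -a --quiet --name=<component>", tails each one with "crictl logs --tail 400 <id>", and pulls CRI-O, kubelet, and dmesg output. The sketch below shows that pattern run locally with os/exec rather than over SSH as minikube does; it assumes crictl is on PATH and that sudo is available, and it is illustrative only.

	```go
	// gatherlogs: an illustrative sketch of the crictl-based log gathering above.
	// Runs the commands locally (minikube runs them over SSH inside the node).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// run executes a command and returns its combined stdout/stderr as a string.
	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
			// List container IDs for this component (all states, IDs only).
			ids, err := run("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component)
			if err != nil {
				fmt.Printf("listing %s containers failed: %v\n", component, err)
				continue
			}
			for _, id := range strings.Fields(ids) {
				fmt.Printf("==> %s [%s] <==\n", component, id)
				logs, _ := run("sudo", "crictl", "logs", "--tail", "400", id)
				fmt.Println(logs)
			}
		}
	}
	```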
	I1212 00:34:32.469729  292217 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:34:32.474936  292217 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1212 00:34:32.475992  292217 api_server.go:141] control plane version: v1.34.2
	I1212 00:34:32.476016  292217 api_server.go:131] duration metric: took 1.006503678s to wait for apiserver health ...
	I1212 00:34:32.476025  292217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:34:32.479628  292217 system_pods.go:59] 8 kube-system pods found
	I1212 00:34:32.479664  292217 system_pods.go:61] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:32.479681  292217 system_pods.go:61] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:32.479695  292217 system_pods.go:61] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 00:34:32.479711  292217 system_pods.go:61] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:32.479726  292217 system_pods.go:61] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:32.479738  292217 system_pods.go:61] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 00:34:32.479757  292217 system_pods.go:61] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:32.479769  292217 system_pods.go:61] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:34:32.479777  292217 system_pods.go:74] duration metric: took 3.744489ms to wait for pod list to return data ...
	I1212 00:34:32.479789  292217 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:34:32.481998  292217 default_sa.go:45] found service account: "default"
	I1212 00:34:32.482021  292217 default_sa.go:55] duration metric: took 2.221892ms for default service account to be created ...
	I1212 00:34:32.482031  292217 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:34:32.484891  292217 system_pods.go:86] 8 kube-system pods found
	I1212 00:34:32.484922  292217 system_pods.go:89] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:32.484941  292217 system_pods.go:89] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:32.484954  292217 system_pods.go:89] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 00:34:32.484963  292217 system_pods.go:89] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:32.484982  292217 system_pods.go:89] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:32.484994  292217 system_pods.go:89] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 00:34:32.485002  292217 system_pods.go:89] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:32.485010  292217 system_pods.go:89] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:34:32.485019  292217 system_pods.go:126] duration metric: took 2.981143ms to wait for k8s-apps to be running ...
	I1212 00:34:32.485026  292217 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:34:32.485080  292217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:32.502896  292217 system_svc.go:56] duration metric: took 17.862237ms WaitForService to wait for kubelet
	I1212 00:34:32.502923  292217 kubeadm.go:587] duration metric: took 3.368842736s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:32.502943  292217 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:34:32.506066  292217 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:34:32.506091  292217 node_conditions.go:123] node cpu capacity is 8
	I1212 00:34:32.506107  292217 node_conditions.go:105] duration metric: took 3.15793ms to run NodePressure ...
	I1212 00:34:32.506121  292217 start.go:242] waiting for startup goroutines ...
	I1212 00:34:32.506137  292217 start.go:247] waiting for cluster config update ...
	I1212 00:34:32.506152  292217 start.go:256] writing updated cluster config ...
	I1212 00:34:32.506449  292217 ssh_runner.go:195] Run: rm -f paused
	I1212 00:34:32.511014  292217 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:32.516674  292217 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8x66p" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 00:34:34.527697  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:36.531266  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:35.509694  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: <nil>
	W1212 00:34:38.002902  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: <nil>
	I1212 00:34:39.003919  290093 pod_ready.go:94] pod "coredns-7d764666f9-44t4m" is "Ready"
	I1212 00:34:39.003948  290093 pod_ready.go:86] duration metric: took 14.506978089s for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.007164  290093 pod_ready.go:83] waiting for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.012443  290093 pod_ready.go:94] pod "etcd-no-preload-675290" is "Ready"
	I1212 00:34:39.012467  290093 pod_ready.go:86] duration metric: took 5.280222ms for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.015470  290093 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.019749  290093 pod_ready.go:94] pod "kube-apiserver-no-preload-675290" is "Ready"
	I1212 00:34:39.019769  290093 pod_ready.go:86] duration metric: took 4.25314ms for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.022432  290093 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.603008  290093 pod_ready.go:94] pod "kube-controller-manager-no-preload-675290" is "Ready"
	I1212 00:34:39.603053  290093 pod_ready.go:86] duration metric: took 580.604646ms for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.801951  290093 pod_ready.go:83] waiting for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.202574  290093 pod_ready.go:94] pod "kube-proxy-7pxpp" is "Ready"
	I1212 00:34:40.202612  290093 pod_ready.go:86] duration metric: took 400.60658ms for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.402173  290093 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.801438  290093 pod_ready.go:94] pod "kube-scheduler-no-preload-675290" is "Ready"
	I1212 00:34:40.801466  290093 pod_ready.go:86] duration metric: took 399.266926ms for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.801497  290093 pod_ready.go:40] duration metric: took 16.307864565s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:40.857643  290093 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 00:34:40.899899  290093 out.go:179] * Done! kubectl is now configured to use "no-preload-675290" cluster and "default" namespace by default
	I1212 00:34:38.070909  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:38.071317  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:38.071368  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:38.071437  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:38.100977  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:38.101000  263844 cri.go:89] found id: ""
	I1212 00:34:38.101008  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:38.101055  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.105578  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:38.105642  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:38.137933  263844 cri.go:89] found id: ""
	I1212 00:34:38.137961  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.137977  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:38.137986  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:38.138051  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:38.172506  263844 cri.go:89] found id: ""
	I1212 00:34:38.172711  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.172783  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:38.172849  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:38.172980  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:38.209393  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:38.209411  263844 cri.go:89] found id: ""
	I1212 00:34:38.209418  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:38.209463  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.213539  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:38.213610  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:38.255964  263844 cri.go:89] found id: ""
	I1212 00:34:38.255987  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.255997  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:38.256005  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:38.256070  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:38.294229  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:38.294319  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:38.294326  263844 cri.go:89] found id: ""
	I1212 00:34:38.294333  263844 logs.go:282] 2 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:38.294395  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.299827  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.304884  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:38.304948  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:38.345668  263844 cri.go:89] found id: ""
	I1212 00:34:38.345711  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.345724  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:38.345733  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:38.345800  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:38.383644  263844 cri.go:89] found id: ""
	I1212 00:34:38.383671  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.383683  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:38.383703  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:38.383716  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:38.511578  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:38.511613  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:38.593958  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:38.593982  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:38.593999  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:38.630769  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:38.630799  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:38.711173  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:38.711213  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:38.758424  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:38.758457  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:38.779867  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:38.779897  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:38.821126  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:38.821166  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:38.859790  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:38.859830  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:41.397624  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:41.398042  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:41.398100  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:41.398171  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:41.433192  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:41.433213  263844 cri.go:89] found id: ""
	I1212 00:34:41.433223  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:41.433281  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.437728  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:41.437792  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:41.463618  263844 cri.go:89] found id: ""
	I1212 00:34:41.463643  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.463653  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:41.463660  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:41.463731  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:41.490995  263844 cri.go:89] found id: ""
	I1212 00:34:41.491018  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.491026  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:41.491035  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:41.491093  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:41.518246  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:41.518267  263844 cri.go:89] found id: ""
	I1212 00:34:41.518276  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:41.518332  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.522787  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:41.522849  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:41.549671  263844 cri.go:89] found id: ""
	I1212 00:34:41.549706  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.549716  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:41.549723  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:41.549783  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:41.577845  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:41.577868  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:41.577874  263844 cri.go:89] found id: ""
	I1212 00:34:41.577882  263844 logs.go:282] 2 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:41.577929  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.581784  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.585354  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:41.585419  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:41.615221  263844 cri.go:89] found id: ""
	I1212 00:34:41.615254  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.615265  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:41.615274  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:41.615336  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:41.641215  263844 cri.go:89] found id: ""
	I1212 00:34:41.641238  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.641248  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:41.641266  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:41.641280  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:41.699142  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:41.699160  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:41.699176  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:41.728077  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:41.728106  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:41.753905  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:41.753927  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:41.778651  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:41.778679  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:41.807391  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:41.807414  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:41.831717  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:41.831741  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:41.882004  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:41.882028  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:41.958828  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:41.958859  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 00:34:39.022400  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:41.025312  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	I1212 00:34:44.472376  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:44.472832  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:44.472887  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:44.472950  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:44.499664  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:44.499683  263844 cri.go:89] found id: ""
	I1212 00:34:44.499690  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:44.499740  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.503544  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:44.503613  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:44.530338  263844 cri.go:89] found id: ""
	I1212 00:34:44.530363  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.530373  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:44.530380  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:44.530421  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:44.556031  263844 cri.go:89] found id: ""
	I1212 00:34:44.556054  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.556064  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:44.556071  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:44.556130  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:44.581377  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:44.581397  263844 cri.go:89] found id: ""
	I1212 00:34:44.581406  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:44.581504  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.585206  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:44.585254  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:44.609906  263844 cri.go:89] found id: ""
	I1212 00:34:44.609929  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.609937  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:44.609942  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:44.609995  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:44.635568  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:44.635590  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:44.635594  263844 cri.go:89] found id: ""
	I1212 00:34:44.635601  263844 logs.go:282] 2 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:44.635645  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.639406  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.642913  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:44.642978  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:44.667080  263844 cri.go:89] found id: ""
	I1212 00:34:44.667105  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.667114  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:44.667120  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:44.667166  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:44.690884  263844 cri.go:89] found id: ""
	I1212 00:34:44.690908  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.690917  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:44.690929  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:44.690940  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:44.741690  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:44.741717  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:44.769952  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:44.769978  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:44.845857  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:44.845885  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:44.898939  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:44.898959  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:44.898973  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:44.929908  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:44.929935  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:44.955084  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:44.955105  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:44.968097  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:44.968119  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:44.992542  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:44.992564  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	W1212 00:34:43.533865  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:46.022030  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 12 00:34:33 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:33.694598569Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5b48c3d11caad3c9bd7c8e78c456b1df729430761f10acd0694f36df144daba3/merged/etc/group: no such file or directory"
	Dec 12 00:34:33 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:33.69502243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:33 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:33.721378208Z" level=info msg="Created container c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jhg57/kubernetes-dashboard" id=4119b681-6d2e-49cb-8c26-0ca6373c60e5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:33 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:33.721971193Z" level=info msg="Starting container: c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420" id=91661fc9-6ad0-4293-b3c4-403436cd0d4e name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:33 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:33.723824693Z" level=info msg="Started container" PID=1540 containerID=c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jhg57/kubernetes-dashboard id=91661fc9-6ad0-4293-b3c4-403436cd0d4e name=/runtime.v1.RuntimeService/StartContainer sandboxID=9993ac356f33f26cd31bd36e7037a905af90c1b2ace3acae58222c407baea99b
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.614669141Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=85473009-3132-4aec-84f6-491cba3dfa68 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.615993928Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c78746d3-710b-464c-a3fd-07b522c19fb2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.619234444Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper" id=397fcc47-aa50-49d7-95a8-d791be24f0fd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.619364372Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.628977387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.629674244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.657815875Z" level=info msg="Created container 23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper" id=397fcc47-aa50-49d7-95a8-d791be24f0fd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.659002058Z" level=info msg="Starting container: 23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036" id=a91dc954-e9a9-435e-bd30-00f7bf1ffb38 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.661879543Z" level=info msg="Started container" PID=1749 containerID=23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper id=a91dc954-e9a9-435e-bd30-00f7bf1ffb38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=54ea13de95b23d7a5e580eab3ce50dc74364353eaf06d031d582dee24a97487d
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.67436137Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4645e31d-4290-46fb-a4c1-d916a3b48fc0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.677452286Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0a625174-928f-4278-8615-fb38448b0680 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.680714091Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper" id=ce66a5fe-95f7-4608-9ac0-b5d981aa7823 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.680850864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.689626852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.690352679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.723033892Z" level=info msg="Created container e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper" id=ce66a5fe-95f7-4608-9ac0-b5d981aa7823 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.723640377Z" level=info msg="Starting container: e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af" id=3b2aa1cb-01cc-4506-90e0-ea1e44897ad8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.726280985Z" level=info msg="Started container" PID=1763 containerID=e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper id=3b2aa1cb-01cc-4506-90e0-ea1e44897ad8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=54ea13de95b23d7a5e580eab3ce50dc74364353eaf06d031d582dee24a97487d
	Dec 12 00:34:38 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:38.68315069Z" level=info msg="Removing container: 23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036" id=819c9eb2-5109-48ec-bb70-ad122656a7a1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:34:38 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:38.697426134Z" level=info msg="Removed container 23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper" id=819c9eb2-5109-48ec-bb70-ad122656a7a1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e903d75d8008f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   1                   54ea13de95b23       dashboard-metrics-scraper-5f989dc9cf-r64gn       kubernetes-dashboard
	c00ac9a4c0b0a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   15 seconds ago      Running             kubernetes-dashboard        0                   9993ac356f33f       kubernetes-dashboard-8694d4445c-jhg57            kubernetes-dashboard
	590f315a0e414       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           25 seconds ago      Running             coredns                     0                   3bab8358c495f       coredns-5dd5756b68-nxwdc                         kube-system
	f1a9f94ef8f89       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           25 seconds ago      Running             busybox                     1                   526dd63209738       busybox                                          default
	d5c10c053a230       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           32 seconds ago      Running             kindnet-cni                 0                   767adbec4f854       kindnet-s2gvw                                    kube-system
	40f0aa5d7111a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           32 seconds ago      Exited              storage-provisioner         0                   e00e88e41f26b       storage-provisioner                              kube-system
	46ced247bb6cb       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           32 seconds ago      Running             kube-proxy                  0                   0f16dc7b7fe33       kube-proxy-pz8kt                                 kube-system
	17479f6c2196c       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           35 seconds ago      Running             kube-controller-manager     0                   493af1802641d       kube-controller-manager-old-k8s-version-743506   kube-system
	a0ad080a093dd       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           35 seconds ago      Running             kube-apiserver              0                   c52b2141f4cc4       kube-apiserver-old-k8s-version-743506            kube-system
	d463fe18198b2       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           35 seconds ago      Running             kube-scheduler              0                   ec21348ab30a5       kube-scheduler-old-k8s-version-743506            kube-system
	be598b8792667       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           35 seconds ago      Running             etcd                        0                   7eadf74e9de8c       etcd-old-k8s-version-743506                      kube-system
	
	
	==> coredns [590f315a0e41429b9947025cf60b15230faac5f9cb474a9172b52736e8344a73] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59572 - 64129 "HINFO IN 2520603107590759331.6970155470677605587. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.488605797s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-743506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-743506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=old-k8s-version-743506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_33_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:33:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-743506
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:34:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:34:26 +0000   Fri, 12 Dec 2025 00:33:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:34:26 +0000   Fri, 12 Dec 2025 00:33:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:34:26 +0000   Fri, 12 Dec 2025 00:33:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:34:26 +0000   Fri, 12 Dec 2025 00:34:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-743506
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                6e4a36d1-9d16-43c1-a591-2e531ad940c7
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 coredns-5dd5756b68-nxwdc                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     86s
	  kube-system                 etcd-old-k8s-version-743506                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         99s
	  kube-system                 kindnet-s2gvw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-old-k8s-version-743506             250m (3%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-old-k8s-version-743506    200m (2%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-pz8kt                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-old-k8s-version-743506             100m (1%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-r64gn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-jhg57             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 32s                kube-proxy       
	  Normal  NodeHasSufficientMemory  99s                kubelet          Node old-k8s-version-743506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                kubelet          Node old-k8s-version-743506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                kubelet          Node old-k8s-version-743506 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           87s                node-controller  Node old-k8s-version-743506 event: Registered Node old-k8s-version-743506 in Controller
	  Normal  NodeReady                72s                kubelet          Node old-k8s-version-743506 status is now: NodeReady
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node old-k8s-version-743506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node old-k8s-version-743506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node old-k8s-version-743506 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20s                node-controller  Node old-k8s-version-743506 event: Registered Node old-k8s-version-743506 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [be598b879266780ddd79387bcfed0dfa8ab737f26c38131f8d1479fbb3247bab] <==
	{"level":"info","ts":"2025-12-12T00:34:14.137189Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-12T00:34:14.137213Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-12T00:34:14.137119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-12T00:34:14.137346Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-12T00:34:14.13751Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T00:34:14.13756Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T00:34:14.139437Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-12T00:34:14.139622Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-12T00:34:14.139654Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-12T00:34:14.139749Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-12T00:34:14.139783Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-12T00:34:15.529615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-12T00:34:15.529657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-12T00:34:15.529693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-12T00:34:15.529707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-12T00:34:15.529712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-12T00:34:15.529721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-12T00:34:15.529727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-12T00:34:15.530855Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T00:34:15.530872Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T00:34:15.530861Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-743506 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-12T00:34:15.531112Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-12T00:34:15.531138Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-12T00:34:15.532158Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-12T00:34:15.532157Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 00:34:49 up  1:17,  0 user,  load average: 5.08, 3.09, 1.92
	Linux old-k8s-version-743506 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5c10c053a2306b36bed7e95ea22c18e8c71e4916805105400cca4e715b39675] <==
	I1212 00:34:17.167090       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:34:17.167332       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1212 00:34:17.167449       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:34:17.167469       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:34:17.167502       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:34:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:34:17.364837       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:34:17.364866       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:34:17.364878       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:34:17.365029       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:34:17.665061       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:34:17.665108       1 metrics.go:72] Registering metrics
	I1212 00:34:17.665181       1 controller.go:711] "Syncing nftables rules"
	I1212 00:34:27.365159       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:34:27.365205       1 main.go:301] handling current node
	I1212 00:34:37.365383       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:34:37.365422       1 main.go:301] handling current node
	I1212 00:34:47.364972       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:34:47.364999       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a0ad080a093ddeaa725b3aa7bd29e92715f2fa158214966408422908cd7efbce] <==
	I1212 00:34:16.440290       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1212 00:34:16.538966       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 00:34:16.538998       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:34:16.539132       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 00:34:16.539151       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 00:34:16.539235       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 00:34:16.539286       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 00:34:16.539298       1 aggregator.go:166] initial CRD sync complete...
	I1212 00:34:16.539308       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 00:34:16.539314       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:34:16.539322       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:34:16.540162       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 00:34:16.576233       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:34:16.582964       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 00:34:17.324593       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 00:34:17.351092       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 00:34:17.370151       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:34:17.376143       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:34:17.381629       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 00:34:17.412112       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.70.116"}
	I1212 00:34:17.424557       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.171.209"}
	I1212 00:34:17.442604       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:34:29.556336       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 00:34:29.703813       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 00:34:29.804490       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [17479f6c2196c0e03f5729c15203a934ede701786b535f944214967080179be1] <==
	I1212 00:34:29.508642       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:34:29.573290       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:34:29.706416       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1212 00:34:29.707766       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1212 00:34:29.911201       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-jhg57"
	I1212 00:34:29.911726       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-r64gn"
	I1212 00:34:29.920281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="214.079708ms"
	I1212 00:34:29.920652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="213.58192ms"
	I1212 00:34:29.930797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.470273ms"
	I1212 00:34:29.931416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.716135ms"
	I1212 00:34:29.931571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.821µs"
	I1212 00:34:29.932394       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:34:29.942864       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.384µs"
	I1212 00:34:29.943629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.788692ms"
	I1212 00:34:29.943752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.812µs"
	I1212 00:34:29.955853       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:34:29.955984       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 00:34:34.735771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.54929ms"
	I1212 00:34:34.737087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="229.703µs"
	I1212 00:34:36.692277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.060824ms"
	I1212 00:34:36.692512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.565µs"
	I1212 00:34:37.696127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.826966ms"
	I1212 00:34:37.696249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.679µs"
	I1212 00:34:38.702078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.686µs"
	I1212 00:34:39.699100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.854µs"
	
	
	==> kube-proxy [46ced247bb6cb9df1846e17c8e1a5267a02c76daf3d5721b9ab21a684f9f59d7] <==
	I1212 00:34:16.985664       1 server_others.go:69] "Using iptables proxy"
	I1212 00:34:16.997866       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1212 00:34:17.017237       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:34:17.019505       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:34:17.019534       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:34:17.019545       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:34:17.019569       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:34:17.019808       1 server.go:846] "Version info" version="v1.28.0"
	I1212 00:34:17.019823       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:17.020393       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:34:17.020431       1 config.go:315] "Starting node config controller"
	I1212 00:34:17.020435       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:34:17.020450       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:34:17.020503       1 config.go:188] "Starting service config controller"
	I1212 00:34:17.020512       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:34:17.121021       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:34:17.121047       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 00:34:17.121145       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d463fe18198b237fef4bf76765fe49362a2634f1272c155ffbf6c2967f301bf9] <==
	I1212 00:34:14.631509       1 serving.go:348] Generated self-signed cert in-memory
	W1212 00:34:16.474280       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:34:16.474328       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:34:16.474348       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:34:16.474360       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:34:16.492463       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1212 00:34:16.492512       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:16.494159       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:34:16.494208       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:34:16.495263       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1212 00:34:16.496352       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 00:34:16.594980       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 00:34:20 old-k8s-version-743506 kubelet[734]: E1212 00:34:20.187275     734 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 00:34:20 old-k8s-version-743506 kubelet[734]: E1212 00:34:20.187374     734 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e73711a2-208b-41a6-a47f-6253638cfdf2-config-volume podName:e73711a2-208b-41a6-a47f-6253638cfdf2 nodeName:}" failed. No retries permitted until 2025-12-12 00:34:24.187352984 +0000 UTC m=+10.684196781 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e73711a2-208b-41a6-a47f-6253638cfdf2-config-volume") pod "coredns-5dd5756b68-nxwdc" (UID: "e73711a2-208b-41a6-a47f-6253638cfdf2") : object "kube-system"/"coredns" not registered
	Dec 12 00:34:20 old-k8s-version-743506 kubelet[734]: E1212 00:34:20.288076     734 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:34:20 old-k8s-version-743506 kubelet[734]: E1212 00:34:20.288110     734 projected.go:198] Error preparing data for projected volume kube-api-access-72247 for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:34:20 old-k8s-version-743506 kubelet[734]: E1212 00:34:20.288182     734 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a0e8330-9dea-4063-9369-234ee8e6ef43-kube-api-access-72247 podName:1a0e8330-9dea-4063-9369-234ee8e6ef43 nodeName:}" failed. No retries permitted until 2025-12-12 00:34:24.288164931 +0000 UTC m=+10.785008723 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-72247" (UniqueName: "kubernetes.io/projected/1a0e8330-9dea-4063-9369-234ee8e6ef43-kube-api-access-72247") pod "busybox" (UID: "1a0e8330-9dea-4063-9369-234ee8e6ef43") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.919688     734 topology_manager.go:215] "Topology Admit Handler" podUID="d0734f6b-f43b-4c8f-a510-cb132816b525" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-jhg57"
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.922204     734 topology_manager.go:215] "Topology Admit Handler" podUID="f6af7f86-6269-45c7-9d04-9157687f0860" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-r64gn"
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.947235     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6af7f86-6269-45c7-9d04-9157687f0860-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-r64gn\" (UID: \"f6af7f86-6269-45c7-9d04-9157687f0860\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn"
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.947296     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwgmj\" (UniqueName: \"kubernetes.io/projected/f6af7f86-6269-45c7-9d04-9157687f0860-kube-api-access-cwgmj\") pod \"dashboard-metrics-scraper-5f989dc9cf-r64gn\" (UID: \"f6af7f86-6269-45c7-9d04-9157687f0860\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn"
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.947556     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d0734f6b-f43b-4c8f-a510-cb132816b525-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-jhg57\" (UID: \"d0734f6b-f43b-4c8f-a510-cb132816b525\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jhg57"
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.947652     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7cg7\" (UniqueName: \"kubernetes.io/projected/d0734f6b-f43b-4c8f-a510-cb132816b525-kube-api-access-p7cg7\") pod \"kubernetes-dashboard-8694d4445c-jhg57\" (UID: \"d0734f6b-f43b-4c8f-a510-cb132816b525\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jhg57"
	Dec 12 00:34:36 old-k8s-version-743506 kubelet[734]: I1212 00:34:36.682525     734 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn" podStartSLOduration=1.315146408 podCreationTimestamp="2025-12-12 00:34:29 +0000 UTC" firstStartedPulling="2025-12-12 00:34:30.247716769 +0000 UTC m=+16.744560557" lastFinishedPulling="2025-12-12 00:34:36.615008153 +0000 UTC m=+23.111851935" observedRunningTime="2025-12-12 00:34:36.680521682 +0000 UTC m=+23.177365483" watchObservedRunningTime="2025-12-12 00:34:36.682437786 +0000 UTC m=+23.179281620"
	Dec 12 00:34:36 old-k8s-version-743506 kubelet[734]: I1212 00:34:36.682866     734 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jhg57" podStartSLOduration=4.241625533 podCreationTimestamp="2025-12-12 00:34:29 +0000 UTC" firstStartedPulling="2025-12-12 00:34:30.244825772 +0000 UTC m=+16.741669560" lastFinishedPulling="2025-12-12 00:34:33.686028948 +0000 UTC m=+20.182872732" observedRunningTime="2025-12-12 00:34:34.706537696 +0000 UTC m=+21.203381490" watchObservedRunningTime="2025-12-12 00:34:36.682828705 +0000 UTC m=+23.179672504"
	Dec 12 00:34:37 old-k8s-version-743506 kubelet[734]: I1212 00:34:37.673779     734 scope.go:117] "RemoveContainer" containerID="23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036"
	Dec 12 00:34:38 old-k8s-version-743506 kubelet[734]: I1212 00:34:38.681378     734 scope.go:117] "RemoveContainer" containerID="23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036"
	Dec 12 00:34:38 old-k8s-version-743506 kubelet[734]: I1212 00:34:38.681738     734 scope.go:117] "RemoveContainer" containerID="e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af"
	Dec 12 00:34:38 old-k8s-version-743506 kubelet[734]: E1212 00:34:38.682114     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r64gn_kubernetes-dashboard(f6af7f86-6269-45c7-9d04-9157687f0860)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn" podUID="f6af7f86-6269-45c7-9d04-9157687f0860"
	Dec 12 00:34:39 old-k8s-version-743506 kubelet[734]: I1212 00:34:39.686109     734 scope.go:117] "RemoveContainer" containerID="e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af"
	Dec 12 00:34:39 old-k8s-version-743506 kubelet[734]: E1212 00:34:39.686510     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r64gn_kubernetes-dashboard(f6af7f86-6269-45c7-9d04-9157687f0860)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn" podUID="f6af7f86-6269-45c7-9d04-9157687f0860"
	Dec 12 00:34:40 old-k8s-version-743506 kubelet[734]: I1212 00:34:40.688457     734 scope.go:117] "RemoveContainer" containerID="e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af"
	Dec 12 00:34:40 old-k8s-version-743506 kubelet[734]: E1212 00:34:40.688959     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r64gn_kubernetes-dashboard(f6af7f86-6269-45c7-9d04-9157687f0860)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn" podUID="f6af7f86-6269-45c7-9d04-9157687f0860"
	Dec 12 00:34:46 old-k8s-version-743506 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:34:46 old-k8s-version-743506 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:34:46 old-k8s-version-743506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:34:46 old-k8s-version-743506 systemd[1]: kubelet.service: Consumed 1.106s CPU time.
	
	
	==> kubernetes-dashboard [c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420] <==
	2025/12/12 00:34:33 Starting overwatch
	2025/12/12 00:34:33 Using namespace: kubernetes-dashboard
	2025/12/12 00:34:33 Using in-cluster config to connect to apiserver
	2025/12/12 00:34:33 Using secret token for csrf signing
	2025/12/12 00:34:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 00:34:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 00:34:33 Successful initial request to the apiserver, version: v1.28.0
	2025/12/12 00:34:33 Generating JWE encryption key
	2025/12/12 00:34:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 00:34:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 00:34:34 Initializing JWE encryption key from synchronized object
	2025/12/12 00:34:34 Creating in-cluster Sidecar client
	2025/12/12 00:34:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 00:34:34 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [40f0aa5d7111ad953f0cf93f67a62cb204d4fc074605fd5ab577a94dc6d2d0a2] <==
	I1212 00:34:16.947897       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 00:34:46.950297       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
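The last entry in the log dump above is storage-provisioner timing out on "https://10.96.0.1:443/version", i.e. the in-cluster kubernetes service ClusterIP. A minimal client-go sketch of the same kind of probe (assuming in-cluster credentials and the standard k8s.io/client-go packages; this is illustrative only and not part of the test harness):

	package main
	
	import (
		"fmt"
		"time"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		// Build the in-cluster config; from inside a pod this points at the
		// kubernetes service ClusterIP (10.96.0.1:443 in the dump above).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cfg.Timeout = 32 * time.Second // same timeout as the failing ?timeout=32s request
	
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Rough equivalent of the provisioner's server-version check.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			fmt.Println("error getting server version:", err)
			return
		}
		fmt.Println("apiserver version:", v.GitVersion)
	}

If this probe also times out from inside the pod, the problem is pod-to-service networking or a paused/unreachable apiserver rather than the provisioner itself.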
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-743506 -n old-k8s-version-743506
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-743506 -n old-k8s-version-743506: exit status 2 (321.339815ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-743506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
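The harness's last check in the post-mortem above lists pods whose phase is not Running via kubectl's --field-selector. A rough client-go equivalent (a sketch only; it assumes a local kubeconfig whose current context is already old-k8s-version-743506, which is not how the harness itself selects the context):

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load ~/.kube/config with whatever context is current.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same server-side filter as the kubectl call: everything not in phase Running.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}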
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-743506
helpers_test.go:244: (dbg) docker inspect old-k8s-version-743506:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671",
	        "Created": "2025-12-12T00:32:56.81457716Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287954,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:34:07.678379993Z",
	            "FinishedAt": "2025-12-12T00:34:06.851652039Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/hosts",
	        "LogPath": "/var/lib/docker/containers/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671/e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671-json.log",
	        "Name": "/old-k8s-version-743506",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-743506:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-743506",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6e7fe2ace92e9460c7482fec928718aee10642e91d2cd0d950b372770e1d671",
	                "LowerDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/19303aab7b27fd5dd820b5a2006a30ec77cb6cfa1aae06803e3c9f01f549c5bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-743506",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-743506/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-743506",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-743506",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-743506",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a8a5e281f0693fd937fc65881ff362ae373436321f2d0f37d356440a11775c45",
	            "SandboxKey": "/var/run/docker/netns/a8a5e281f069",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-743506": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cdcdaa73e4279f08edc884bf6d2244b7a4df03294612c4ea8561dd87e0d0ec16",
	                    "EndpointID": "20afcfc40aa2874517ac090d98642d677f57e757ff0662907dcd84f48a8823bb",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "d2:29:49:e6:27:6b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-743506",
	                        "e6e7fe2ace92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
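The docker inspect dump above is where the published port bindings live: 8443/tcp (the apiserver) maps to 127.0.0.1:33076 on the host. A small sketch of pulling that binding out of the inspect JSON programmatically; the struct below is trimmed to only the fields visible in the dump and is not a complete Docker API type:

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// Just enough of the `docker inspect` output to read a port binding.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}
	
	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-743506").Output()
		if err != nil {
			panic(err)
		}
		// docker inspect always returns a JSON array, one element per container.
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		// 8443/tcp is the apiserver port; in the dump above it maps to 127.0.0.1:33076.
		for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIP, b.HostPort)
		}
	}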
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-743506 -n old-k8s-version-743506
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-743506 -n old-k8s-version-743506: exit status 2 (326.574258ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-743506 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-743506 logs -n 25: (1.034879155s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p stopped-upgrade-148693                                                                                                                                                                                                                     │ stopped-upgrade-148693 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p cert-options-319518 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-319518    │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p cert-expiration-673665 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-673665 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ delete  │ -p running-upgrade-299658                                                                                                                                                                                                                     │ running-upgrade-299658 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-expiration-673665                                                                                                                                                                                                                     │ cert-expiration-673665 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ cert-options-319518 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319518    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ -p cert-options-319518 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319518    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-options-319518                                                                                                                                                                                                                        │ cert-options-319518    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p old-k8s-version-743506 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p no-preload-675290 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-858659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ stop    │ -p embed-certs-858659 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-743506 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p no-preload-675290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-858659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ image   │ old-k8s-version-743506 image list --format=json                                                                                                                                                                                               │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p old-k8s-version-743506 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:34:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
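The header above documents the klog-style line format used throughout this start log. A minimal Go sketch of a parser for that format, handy when grepping these dumps programmatically; the regular expression and field names are my own, not part of minikube, and continuation lines (command output, healthz dumps) are simply skipped.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
	)

	// Matches klog-style lines such as:
	// I1212 00:34:22.194991  292217 out.go:360] Setting OutFile to fd 1 ...
	var klogLine = regexp.MustCompile(`^\s*([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ \]]+:\d+)\] (.*)$`)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some log lines are very long
		for sc.Scan() {
			m := klogLine.FindStringSubmatch(sc.Text())
			if m == nil {
				continue // not a prefixed log line
			}
			severity, month, day, ts, pid, caller, msg := m[1], m[2], m[3], m[4], m[5], m[6], m[7]
			fmt.Printf("%s %s/%s %s pid=%s %s %q\n", severity, month, day, ts, pid, caller, msg)
		}
	}
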
	I1212 00:34:22.194991  292217 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:34:22.195276  292217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:22.195286  292217 out.go:374] Setting ErrFile to fd 2...
	I1212 00:34:22.195290  292217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:22.195461  292217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:34:22.195951  292217 out.go:368] Setting JSON to false
	I1212 00:34:22.197248  292217 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4608,"bootTime":1765495054,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:34:22.197307  292217 start.go:143] virtualization: kvm guest
	I1212 00:34:22.199590  292217 out.go:179] * [embed-certs-858659] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:34:22.200695  292217 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:34:22.200770  292217 notify.go:221] Checking for updates...
	I1212 00:34:22.202938  292217 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:34:22.205663  292217 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:34:22.207024  292217 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:34:22.208159  292217 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:34:22.209335  292217 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:34:22.211048  292217 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:34:22.211868  292217 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:34:22.238149  292217 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:34:22.238284  292217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:22.300345  292217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:34:22.28954278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:34:22.300452  292217 docker.go:319] overlay module found
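The `docker system info --format "{{json .}}"` call above is how the driver gathers host facts (server version, cgroup driver, CPU and memory totals) before validating the profile. A minimal sketch, not minikube's own code, of running the same command and decoding a few of those fields; the struct is a hypothetical subset of the full info document, with JSON keys taken from the output shown in the log.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// dockerInfo is a hypothetical subset of the JSON emitted by
	// `docker system info --format "{{json .}}"`.
	type dockerInfo struct {
		ServerVersion   string `json:"ServerVersion"`
		CgroupDriver    string `json:"CgroupDriver"`
		OperatingSystem string `json:"OperatingSystem"`
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			log.Fatalf("docker system info: %v", err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			log.Fatalf("decode: %v", err)
		}
		fmt.Printf("docker %s on %s, cgroup driver %s, %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.OperatingSystem, info.CgroupDriver, info.NCPU, info.MemTotal)
	}
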
	I1212 00:34:22.302189  292217 out.go:179] * Using the docker driver based on existing profile
	I1212 00:34:22.303236  292217 start.go:309] selected driver: docker
	I1212 00:34:22.303252  292217 start.go:927] validating driver "docker" against &{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:34:22.303357  292217 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:34:22.304085  292217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:22.364908  292217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:34:22.355220994 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:34:22.365186  292217 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:22.365208  292217 cni.go:84] Creating CNI manager for ""
	I1212 00:34:22.365254  292217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:34:22.365282  292217 start.go:353] cluster config:
	{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:34:22.367044  292217 out.go:179] * Starting "embed-certs-858659" primary control-plane node in "embed-certs-858659" cluster
	I1212 00:34:22.368095  292217 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:34:22.369191  292217 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:34:22.370219  292217 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:34:22.370254  292217 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:34:22.370268  292217 cache.go:65] Caching tarball of preloaded images
	I1212 00:34:22.370312  292217 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:34:22.370362  292217 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:34:22.370377  292217 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:34:22.370514  292217 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:34:22.391729  292217 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:34:22.391750  292217 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:34:22.391769  292217 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:34:22.391800  292217 start.go:360] acquireMachinesLock for embed-certs-858659: {Name:mk65733daa8eb01c9a3ad2d27b0888c2a1a8b319 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:34:22.391881  292217 start.go:364] duration metric: took 47.626µs to acquireMachinesLock for "embed-certs-858659"
	I1212 00:34:22.391906  292217 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:34:22.391912  292217 fix.go:54] fixHost starting: 
	I1212 00:34:22.392190  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:22.410761  292217 fix.go:112] recreateIfNeeded on embed-certs-858659: state=Stopped err=<nil>
	W1212 00:34:22.410787  292217 fix.go:138] unexpected machine state, will restart: <nil>
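fixHost above inspects the container with `docker container inspect --format={{.State.Status}}`, sees it Stopped, and falls through to a restart (the `docker start` appears further down in this log once the machine lock work resumes). A minimal sketch of that inspect-then-start pattern using os/exec; the helper and its error handling are illustrative only, not minikube's implementation.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// ensureRunning inspects a container's state and starts it if it is not
	// running, mirroring the inspect/start pattern visible in the log above.
	func ensureRunning(name string) error {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return fmt.Errorf("inspect %s: %w", name, err)
		}
		state := strings.TrimSpace(string(out))
		if state == "running" {
			return nil
		}
		// "exited", "created", etc. -> try to start it.
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			return fmt.Errorf("start %s: %w", name, err)
		}
		return nil
	}

	func main() {
		if err := ensureRunning("embed-certs-858659"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("container is running")
	}
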
	I1212 00:34:17.930547  287750 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:34:17.934787  287750 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1212 00:34:17.935919  287750 api_server.go:141] control plane version: v1.28.0
	I1212 00:34:17.935939  287750 api_server.go:131] duration metric: took 506.401624ms to wait for apiserver health ...
	I1212 00:34:17.935961  287750 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:34:17.939387  287750 system_pods.go:59] 8 kube-system pods found
	I1212 00:34:17.939432  287750 system_pods.go:61] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:17.939448  287750 system_pods.go:61] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:17.939461  287750 system_pods.go:61] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:34:17.939470  287750 system_pods.go:61] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:17.939494  287750 system_pods.go:61] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:17.939506  287750 system_pods.go:61] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:34:17.939514  287750 system_pods.go:61] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:17.939523  287750 system_pods.go:61] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:34:17.939531  287750 system_pods.go:74] duration metric: took 3.56333ms to wait for pod list to return data ...
	I1212 00:34:17.939542  287750 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:34:17.941406  287750 default_sa.go:45] found service account: "default"
	I1212 00:34:17.941423  287750 default_sa.go:55] duration metric: took 1.872906ms for default service account to be created ...
	I1212 00:34:17.941431  287750 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:34:17.944007  287750 system_pods.go:86] 8 kube-system pods found
	I1212 00:34:17.944034  287750 system_pods.go:89] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:17.944044  287750 system_pods.go:89] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:17.944052  287750 system_pods.go:89] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:34:17.944060  287750 system_pods.go:89] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:17.944069  287750 system_pods.go:89] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:17.944079  287750 system_pods.go:89] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:34:17.944088  287750 system_pods.go:89] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:17.944094  287750 system_pods.go:89] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:34:17.944105  287750 system_pods.go:126] duration metric: took 2.668947ms to wait for k8s-apps to be running ...
	I1212 00:34:17.944116  287750 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:34:17.944183  287750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:17.956567  287750 system_svc.go:56] duration metric: took 12.447031ms WaitForService to wait for kubelet
	I1212 00:34:17.956589  287750 kubeadm.go:587] duration metric: took 3.757690609s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:17.956609  287750 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:34:17.958554  287750 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:34:17.958575  287750 node_conditions.go:123] node cpu capacity is 8
	I1212 00:34:17.958593  287750 node_conditions.go:105] duration metric: took 1.974354ms to run NodePressure ...
	I1212 00:34:17.958607  287750 start.go:242] waiting for startup goroutines ...
	I1212 00:34:17.958622  287750 start.go:247] waiting for cluster config update ...
	I1212 00:34:17.958640  287750 start.go:256] writing updated cluster config ...
	I1212 00:34:17.958881  287750 ssh_runner.go:195] Run: rm -f paused
	I1212 00:34:17.962438  287750 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:17.965710  287750 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 00:34:19.971812  287750 pod_ready.go:104] pod "coredns-5dd5756b68-nxwdc" is not "Ready", error: node "old-k8s-version-743506" hosting pod "coredns-5dd5756b68-nxwdc" is not "Ready" (will retry)
	W1212 00:34:21.972884  287750 pod_ready.go:104] pod "coredns-5dd5756b68-nxwdc" is not "Ready", error: node "old-k8s-version-743506" hosting pod "coredns-5dd5756b68-nxwdc" is not "Ready" (will retry)
	I1212 00:34:21.825329  290093 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:34:21.826261  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 00:34:21.826281  290093 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 00:34:21.826370  290093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:34:21.849641  290093 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:21.849646  290093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:34:21.849664  290093 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:34:21.849826  290093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:34:21.851877  290093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:34:21.890953  290093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:34:21.983765  290093 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:34:21.984639  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 00:34:21.984658  290093 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 00:34:21.992024  290093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:34:22.003673  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 00:34:22.003694  290093 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 00:34:22.005666  290093 node_ready.go:35] waiting up to 6m0s for node "no-preload-675290" to be "Ready" ...
	I1212 00:34:22.016144  290093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:22.022417  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 00:34:22.022439  290093 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 00:34:22.043609  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 00:34:22.043666  290093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 00:34:22.064203  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 00:34:22.064230  290093 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 00:34:22.081605  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 00:34:22.081626  290093 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 00:34:22.098674  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 00:34:22.098715  290093 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 00:34:22.115963  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 00:34:22.115987  290093 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 00:34:22.132839  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:34:22.132864  290093 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 00:34:22.148773  290093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
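The addon flow above copies each dashboard manifest to /etc/kubernetes/addons/ over SSH and then applies them all in a single `kubectl apply -f ... -f ...` with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. A minimal sketch of that apply step driven through os/exec; the binary and kubeconfig paths mirror the logged command (which minikube runs on the node over SSH, not locally), while the helper itself is illustrative.

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	// applyAddons runs `kubectl apply -f <m1> -f <m2> ...` with KUBECONFIG set,
	// matching the shape of the command in the log above.
	func applyAddons(kubectl, kubeconfig string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // ... remaining dashboard manifests
		}
		err := applyAddons("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"/var/lib/minikube/kubeconfig", manifests)
		if err != nil {
			log.Fatal(err)
		}
	}
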
	I1212 00:34:22.954440  290093 node_ready.go:49] node "no-preload-675290" is "Ready"
	I1212 00:34:22.954493  290093 node_ready.go:38] duration metric: took 948.786308ms for node "no-preload-675290" to be "Ready" ...
	I1212 00:34:22.954513  290093 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:34:22.954568  290093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:34:23.460200  290093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.468149091s)
	I1212 00:34:23.460313  290093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.444116968s)
	I1212 00:34:23.460400  290093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.311593953s)
	I1212 00:34:23.460429  290093 api_server.go:72] duration metric: took 1.662216059s to wait for apiserver process to appear ...
	I1212 00:34:23.460441  290093 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:34:23.460498  290093 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:34:23.461985  290093 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-675290 addons enable metrics-server
	
	I1212 00:34:23.465588  290093 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:23.465612  290093 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:34:23.466996  290093 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1212 00:34:23.468160  290093 addons.go:530] duration metric: took 1.669912774s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 00:34:23.961381  290093 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:34:23.967063  290093 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:23.967088  290093 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:34:24.460610  290093 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:34:24.465379  290093 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 00:34:24.466274  290093 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 00:34:24.466296  290093 api_server.go:131] duration metric: took 1.005848918s to wait for apiserver health ...
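The 500 responses above list the poststarthooks still failing (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes), and the check is simply retried every ~500ms until /healthz returns 200 "ok". A minimal sketch of such a polling loop against the same endpoint; the insecure TLS setting is an assumption made only because this sketch probes by IP without loading the cluster CA, which is not how the real check is configured.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
	// or the deadline expires, roughly mirroring the loop in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip certificate verification in this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// On 500, the body enumerates [+]/[-] poststarthook results.
				fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			log.Fatal(err)
		}
		log.Println("apiserver healthy")
	}
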
	I1212 00:34:24.466304  290093 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:34:24.469960  290093 system_pods.go:59] 8 kube-system pods found
	I1212 00:34:24.470002  290093 system_pods.go:61] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:24.470020  290093 system_pods.go:61] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:24.470030  290093 system_pods.go:61] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:34:24.470042  290093 system_pods.go:61] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:24.470052  290093 system_pods.go:61] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:24.470065  290093 system_pods.go:61] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:34:24.470075  290093 system_pods.go:61] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:24.470083  290093 system_pods.go:61] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Running
	I1212 00:34:24.470095  290093 system_pods.go:74] duration metric: took 3.783504ms to wait for pod list to return data ...
	I1212 00:34:24.470107  290093 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:34:24.472424  290093 default_sa.go:45] found service account: "default"
	I1212 00:34:24.472446  290093 default_sa.go:55] duration metric: took 2.32759ms for default service account to be created ...
	I1212 00:34:24.472455  290093 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:34:24.474765  290093 system_pods.go:86] 8 kube-system pods found
	I1212 00:34:24.474789  290093 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:24.474797  290093 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:24.474802  290093 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:34:24.474807  290093 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:24.474812  290093 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:24.474819  290093 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:34:24.474824  290093 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:24.474828  290093 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Running
	I1212 00:34:24.474837  290093 system_pods.go:126] duration metric: took 2.375958ms to wait for k8s-apps to be running ...
	I1212 00:34:24.474842  290093 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:34:24.474880  290093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:24.487107  290093 system_svc.go:56] duration metric: took 12.259625ms WaitForService to wait for kubelet
	I1212 00:34:24.487123  290093 kubeadm.go:587] duration metric: took 2.688911743s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:24.487151  290093 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:34:24.489298  290093 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:34:24.489318  290093 node_conditions.go:123] node cpu capacity is 8
	I1212 00:34:24.489333  290093 node_conditions.go:105] duration metric: took 2.175894ms to run NodePressure ...
	I1212 00:34:24.489343  290093 start.go:242] waiting for startup goroutines ...
	I1212 00:34:24.489352  290093 start.go:247] waiting for cluster config update ...
	I1212 00:34:24.489362  290093 start.go:256] writing updated cluster config ...
	I1212 00:34:24.489611  290093 ssh_runner.go:195] Run: rm -f paused
	I1212 00:34:24.493607  290093 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:24.496951  290093 pod_ready.go:83] waiting for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
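The extra wait above checks pods matching a set of control-plane labels (k8s-app=kube-dns, component=etcd, and so on) until each is "Ready" or gone. A minimal client-go sketch of inspecting Readiness for one of those selectors from the host; the kubeconfig path and the selector come from the log, everything else is illustrative and not minikube's own code.

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether a pod's PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22101-10975/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
		}
	}
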
	I1212 00:34:22.412332  292217 out.go:252] * Restarting existing docker container for "embed-certs-858659" ...
	I1212 00:34:22.412393  292217 cli_runner.go:164] Run: docker start embed-certs-858659
	I1212 00:34:22.675395  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:22.697554  292217 kic.go:430] container "embed-certs-858659" state is running.
	I1212 00:34:22.698003  292217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:34:22.721205  292217 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:34:22.721434  292217 machine.go:94] provisionDockerMachine start ...
	I1212 00:34:22.721530  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:22.740223  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:22.740531  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:22.740552  292217 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:34:22.741123  292217 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48100->127.0.0.1:33083: read: connection reset by peer
	I1212 00:34:25.873905  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:34:25.873938  292217 ubuntu.go:182] provisioning hostname "embed-certs-858659"
	I1212 00:34:25.874010  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:25.891640  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:25.891843  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:25.891854  292217 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-858659 && echo "embed-certs-858659" | sudo tee /etc/hostname
	I1212 00:34:26.033680  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:34:26.033749  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.054661  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:26.054969  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:26.055001  292217 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-858659' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-858659/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-858659' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:34:26.193045  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: 
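provisionDockerMachine above drives a native SSH client against the container's forwarded port (127.0.0.1:33083, user "docker", the machine's id_rsa key), running `hostname` and then the hostname/ /etc/hosts commands shown; note the first dial fails with a connection reset while the container is still booting and is retried. A minimal golang.org/x/crypto/ssh sketch of that connect-and-run step; disabling host key verification here is an assumption made for brevity, not a claim about minikube's configuration.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH connects to a forwarded container port and runs one command,
	// similar to the provisioning steps in the log above.
	func runOverSSH(addr, user, keyPath, command string) (string, error) {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User: user,
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Assumption: skip host key verification in this sketch.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err // e.g. "connection reset by peer" while the container boots
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(command)
		return string(out), err
	}

	func main() {
		out, err := runOverSSH("127.0.0.1:33083", "docker",
			"/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa",
			"hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(out)
	}
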
	I1212 00:34:26.193085  292217 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:34:26.193135  292217 ubuntu.go:190] setting up certificates
	I1212 00:34:26.193149  292217 provision.go:84] configureAuth start
	I1212 00:34:26.193222  292217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:34:26.210729  292217 provision.go:143] copyHostCerts
	I1212 00:34:26.210790  292217 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:34:26.210805  292217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:34:26.210864  292217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:34:26.211018  292217 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:34:26.211030  292217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:34:26.211064  292217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:34:26.211138  292217 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:34:26.211145  292217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:34:26.211176  292217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:34:26.211239  292217 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.embed-certs-858659 san=[127.0.0.1 192.168.94.2 embed-certs-858659 localhost minikube]
	I1212 00:34:26.334330  292217 provision.go:177] copyRemoteCerts
	I1212 00:34:26.334387  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:34:26.334432  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.352293  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:26.448550  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:34:26.465534  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:34:26.482790  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:34:26.500628  292217 provision.go:87] duration metric: took 307.45892ms to configureAuth
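configureAuth above regenerates a server certificate whose SANs must cover every name the machine is reached by (127.0.0.1, 192.168.94.2, embed-certs-858659, localhost, minikube) and copies it to /etc/docker/server.pem on the node. A minimal sketch that parses such a PEM file and prints its SANs, the quickest way to verify this step by hand; it is an illustration, not part of the test suite, and the local path is a placeholder.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Path is illustrative; on the node the file lives at /etc/docker/server.pem.
		data, err := os.ReadFile("server.pem")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil || block.Type != "CERTIFICATE" {
			log.Fatal("no CERTIFICATE block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
		fmt.Println("expires: ", cert.NotAfter)
	}
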
	I1212 00:34:26.500654  292217 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:34:26.500854  292217 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:34:26.500972  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.518572  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:26.518811  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:26.518834  292217 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:34:26.850738  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:34:26.850803  292217 machine.go:97] duration metric: took 4.12935252s to provisionDockerMachine
	I1212 00:34:26.850819  292217 start.go:293] postStartSetup for "embed-certs-858659" (driver="docker")
	I1212 00:34:26.850842  292217 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:34:26.850914  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:34:26.850984  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.871065  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:26.966453  292217 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:34:26.970137  292217 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:34:26.970162  292217 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:34:26.970172  292217 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:34:26.970227  292217 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:34:26.970325  292217 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:34:26.970442  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:34:26.978705  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:34:26.995716  292217 start.go:296] duration metric: took 144.870061ms for postStartSetup
	I1212 00:34:26.995782  292217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:34:26.995835  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:27.014285  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:27.105922  292217 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:34:27.110345  292217 fix.go:56] duration metric: took 4.718428372s for fixHost
	I1212 00:34:27.110371  292217 start.go:83] releasing machines lock for "embed-certs-858659", held for 4.718475367s
	I1212 00:34:27.110437  292217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:34:27.127373  292217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:34:27.127396  292217 ssh_runner.go:195] Run: cat /version.json
	I1212 00:34:27.127437  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:27.127445  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:27.144516  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:27.145862  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	W1212 00:34:24.471877  287750 pod_ready.go:104] pod "coredns-5dd5756b68-nxwdc" is not "Ready", error: node "old-k8s-version-743506" hosting pod "coredns-5dd5756b68-nxwdc" is not "Ready" (will retry)
	I1212 00:34:26.971400  287750 pod_ready.go:94] pod "coredns-5dd5756b68-nxwdc" is "Ready"
	I1212 00:34:26.971427  287750 pod_ready.go:86] duration metric: took 9.005696281s for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:26.974242  287750 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:27.236084  292217 ssh_runner.go:195] Run: systemctl --version
	I1212 00:34:27.289011  292217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:34:27.320985  292217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:34:27.325714  292217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:34:27.325777  292217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:34:27.334535  292217 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:34:27.334554  292217 start.go:496] detecting cgroup driver to use...
	I1212 00:34:27.334579  292217 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:34:27.334633  292217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:34:27.348435  292217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:34:27.359652  292217 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:34:27.359703  292217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:34:27.374109  292217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:34:27.386469  292217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:34:27.460054  292217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:34:27.535041  292217 docker.go:234] disabling docker service ...
	I1212 00:34:27.535088  292217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:34:27.548165  292217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:34:27.559573  292217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:34:27.632790  292217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:34:27.713620  292217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:34:27.725354  292217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:34:27.738210  292217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:34:27.738258  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.746385  292217 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:34:27.746427  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.754504  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.762234  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.770331  292217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:34:27.777754  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.785740  292217 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.793156  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.800928  292217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:34:27.807604  292217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:34:27.814250  292217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:34:27.892059  292217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:34:28.023411  292217 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:34:28.023508  292217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:34:28.027324  292217 start.go:564] Will wait 60s for crictl version
	I1212 00:34:28.027377  292217 ssh_runner.go:195] Run: which crictl
	I1212 00:34:28.030733  292217 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:34:28.053419  292217 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:34:28.053521  292217 ssh_runner.go:195] Run: crio --version
	I1212 00:34:28.078218  292217 ssh_runner.go:195] Run: crio --version
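The sed edits a few lines above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to systemd, and open unprivileged ports before crio is restarted. The verification below is an illustrative sketch only (file path and keys are taken from the log; exact crio/crictl output varies by version):

	# Sketch only: confirm the settings the sed commands above wrote
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# After "systemctl restart crio", the runtime should answer on its socket
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version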
	I1212 00:34:28.104986  292217 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:34:28.106118  292217 cli_runner.go:164] Run: docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:34:28.122845  292217 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 00:34:28.127252  292217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
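The grep/echo/cp pipeline above is minikube's idempotent way of pinning host.minikube.internal: drop any stale tab-separated entry, append the current gateway IP, and copy the temp file back over /etc/hosts. A standalone sketch of the same idiom (IP from the log; the temp path is arbitrary):

	# Sketch only: idempotently (re)write the host.minikube.internal entry
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.94.1\thost.minikube.internal\n'; } > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts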
	I1212 00:34:28.136929  292217 kubeadm.go:884] updating cluster {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:34:28.137027  292217 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:34:28.137068  292217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:34:28.166606  292217 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:34:28.166625  292217 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:34:28.166660  292217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:34:28.189997  292217 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:34:28.190014  292217 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:34:28.190022  292217 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1212 00:34:28.190122  292217 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-858659 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:34:28.190197  292217 ssh_runner.go:195] Run: crio config
	I1212 00:34:28.233455  292217 cni.go:84] Creating CNI manager for ""
	I1212 00:34:28.233501  292217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:34:28.233520  292217 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:34:28.233549  292217 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-858659 NodeName:embed-certs-858659 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:34:28.233667  292217 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-858659"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:34:28.233728  292217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:34:28.241238  292217 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:34:28.241286  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:34:28.248529  292217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1212 00:34:28.259931  292217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:34:28.272064  292217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
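The 2214-byte file just copied to /var/tmp/minikube/kubeadm.yaml.new is the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document printed above. A quick way to sanity-check such a file before it is applied, sketched under the assumption that the staged v1.34.2 binaries (confirmed under /var/lib/minikube/binaries a few lines below) are used; the diff is the same comparison the log itself performs further down:

	# Sketch only: compare the freshly rendered config against the one in use
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	# And print upstream defaults for a side-by-side read
	/var/lib/minikube/binaries/v1.34.2/kubeadm config print init-defaults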
	I1212 00:34:28.283961  292217 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:34:28.287295  292217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:34:28.296174  292217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:34:28.374313  292217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:34:28.399445  292217 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659 for IP: 192.168.94.2
	I1212 00:34:28.399469  292217 certs.go:195] generating shared ca certs ...
	I1212 00:34:28.399502  292217 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:28.399682  292217 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:34:28.399740  292217 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:34:28.399754  292217 certs.go:257] generating profile certs ...
	I1212 00:34:28.399858  292217 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key
	I1212 00:34:28.399921  292217 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc
	I1212 00:34:28.399969  292217 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key
	I1212 00:34:28.400101  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:34:28.400154  292217 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:34:28.400167  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:34:28.400199  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:34:28.400232  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:34:28.400265  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:34:28.400324  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:34:28.401140  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:34:28.418445  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:34:28.435360  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:34:28.454053  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:34:28.476744  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 00:34:28.494464  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:34:28.512911  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:34:28.532124  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:34:28.550167  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:34:28.569797  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:34:28.588171  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:34:28.604980  292217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:34:28.616294  292217 ssh_runner.go:195] Run: openssl version
	I1212 00:34:28.621964  292217 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.628771  292217 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:34:28.635451  292217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.638885  292217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.638945  292217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.673421  292217 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:34:28.680066  292217 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.686678  292217 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:34:28.693326  292217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.696643  292217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.696676  292217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.730803  292217 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:34:28.737734  292217 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.744631  292217 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:34:28.751977  292217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.755400  292217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.755447  292217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.789571  292217 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
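Each CA file is linked into /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0 in the checks above), which is how OpenSSL-based clients on the node locate trusted certificates. The idiom, sketched for one of the files named in the log:

	# Sketch only: link a CA cert under its subject-hash name, as the log does
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"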
	I1212 00:34:28.796415  292217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:34:28.799981  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:34:28.833241  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:34:28.867120  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:34:28.906082  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:34:28.954312  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:34:28.999466  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
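Each openssl invocation above exits non-zero if the certificate would expire within 86400 seconds (24 hours), which is how minikube decides whether a profile's certs need regenerating. The same check can be run by hand against any file under /var/lib/minikube/certs (path from the log):

	# Sketch only: exit status 0 means the cert is still valid 24h from now
	sudo openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/apiserver.crt && echo "ok for 24h"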
	I1212 00:34:29.061379  292217 kubeadm.go:401] StartCluster: {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:34:29.061506  292217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:34:29.061568  292217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:34:29.094454  292217 cri.go:89] found id: "6eb73517ebee81333693231ad0296e35a758eeb5aa9cc45c7ee7663ea6e652e4"
	I1212 00:34:29.094504  292217 cri.go:89] found id: "07f4a35d8d4d1bc19d0c0dc6b015f381bc482dc379a9a416e57528498aa8e1e5"
	I1212 00:34:29.094512  292217 cri.go:89] found id: "a4dfa21dd3b082f3a9464428da14520464aa97d1551c9a8ddffbf56d063877a6"
	I1212 00:34:29.094516  292217 cri.go:89] found id: "3da8c9c634a0528028b8a0875544b6a0c41e72dc8bf1ff1da95beccf80094376"
	I1212 00:34:29.094520  292217 cri.go:89] found id: ""
	I1212 00:34:29.094580  292217 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 00:34:29.106285  292217 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:29Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:34:29.106353  292217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:34:29.113726  292217 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:34:29.113742  292217 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:34:29.113783  292217 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:34:29.120655  292217 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:34:29.121366  292217 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-858659" does not appear in /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:34:29.121810  292217 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-10975/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-858659" cluster setting kubeconfig missing "embed-certs-858659" context setting]
	I1212 00:34:29.122410  292217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:29.123956  292217 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:34:29.131575  292217 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1212 00:34:29.131601  292217 kubeadm.go:602] duration metric: took 17.853493ms to restartPrimaryControlPlane
	I1212 00:34:29.131610  292217 kubeadm.go:403] duration metric: took 70.240665ms to StartCluster
	I1212 00:34:29.131624  292217 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:29.131695  292217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:34:29.133806  292217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:29.134050  292217 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:34:29.134111  292217 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:34:29.134220  292217 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-858659"
	I1212 00:34:29.134242  292217 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-858659"
	W1212 00:34:29.134251  292217 addons.go:248] addon storage-provisioner should already be in state true
	I1212 00:34:29.134244  292217 addons.go:70] Setting dashboard=true in profile "embed-certs-858659"
	I1212 00:34:29.134268  292217 addons.go:239] Setting addon dashboard=true in "embed-certs-858659"
	I1212 00:34:29.134259  292217 addons.go:70] Setting default-storageclass=true in profile "embed-certs-858659"
	W1212 00:34:29.134278  292217 addons.go:248] addon dashboard should already be in state true
	I1212 00:34:29.134290  292217 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:34:29.134294  292217 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:34:29.134312  292217 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:34:29.134291  292217 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-858659"
	I1212 00:34:29.134698  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.134803  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.134819  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.135940  292217 out.go:179] * Verifying Kubernetes components...
	I1212 00:34:29.137294  292217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:34:29.160012  292217 addons.go:239] Setting addon default-storageclass=true in "embed-certs-858659"
	W1212 00:34:29.160036  292217 addons.go:248] addon default-storageclass should already be in state true
	I1212 00:34:29.160062  292217 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:34:29.160531  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.161454  292217 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:29.162569  292217 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 00:34:29.162613  292217 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:34:29.162631  292217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:34:29.162683  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:29.164828  292217 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1212 00:34:26.502049  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: node "no-preload-675290" hosting pod "coredns-7d764666f9-44t4m" is not "Ready" (will retry)
	W1212 00:34:28.503136  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: node "no-preload-675290" hosting pod "coredns-7d764666f9-44t4m" is not "Ready" (will retry)
	I1212 00:34:29.166119  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 00:34:29.166135  292217 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 00:34:29.166188  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:29.192491  292217 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:29.192516  292217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:34:29.192574  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:29.196179  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:29.198591  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:29.216901  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:29.287840  292217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:34:29.302952  292217 node_ready.go:35] waiting up to 6m0s for node "embed-certs-858659" to be "Ready" ...
	I1212 00:34:29.317900  292217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:34:29.321896  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 00:34:29.321912  292217 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 00:34:29.343648  292217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:29.344330  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 00:34:29.344372  292217 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 00:34:29.363923  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 00:34:29.363955  292217 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 00:34:29.381875  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 00:34:29.381897  292217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 00:34:29.396784  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 00:34:29.396803  292217 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 00:34:29.410654  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 00:34:29.410676  292217 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 00:34:29.425501  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 00:34:29.425524  292217 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 00:34:29.440231  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 00:34:29.440252  292217 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 00:34:29.452746  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:34:29.452766  292217 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 00:34:29.466329  292217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
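The single kubectl apply above installs the whole dashboard addon (namespace, RBAC, configmap, deployment, secret, service) from the manifests copied to /etc/kubernetes/addons. A hedged follow-up check, assuming the standard kubernetes-dashboard namespace those manifests create:

	# Sketch only: watch the dashboard workload come up after the apply
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.2/kubectl -n kubernetes-dashboard get pods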
	I1212 00:34:30.818434  292217 node_ready.go:49] node "embed-certs-858659" is "Ready"
	I1212 00:34:30.818487  292217 node_ready.go:38] duration metric: took 1.515392528s for node "embed-certs-858659" to be "Ready" ...
	I1212 00:34:30.818508  292217 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:34:30.818565  292217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:34:31.468911  292217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.150974747s)
	I1212 00:34:31.468978  292217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.125284091s)
	I1212 00:34:31.469122  292217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.0027624s)
	I1212 00:34:31.469455  292217 api_server.go:72] duration metric: took 2.335374756s to wait for apiserver process to appear ...
	I1212 00:34:31.469505  292217 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:34:31.469524  292217 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:34:31.473590  292217 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-858659 addons enable metrics-server
	
	I1212 00:34:31.476459  292217 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:31.476505  292217 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
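The 500 responses above are typical for a short window after a control-plane restart: the [-] lines show two post-start hooks (RBAC bootstrap roles and the default priority classes) that have not completed yet, and minikube simply re-polls the endpoint until it returns 200. The same endpoint can be checked by hand; a sketch only, using -k because the API server certificate is not in the host trust store (anonymous access to /healthz is the Kubernetes default but may be disabled):

	# Sketch only: poll the apiserver health endpoint the log is waiting on
	curl -ks "https://192.168.94.2:8443/healthz?verbose"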
	I1212 00:34:31.483691  292217 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1212 00:34:30.508733  263844 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062075524s)
	W1212 00:34:30.508779  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1212 00:34:30.508790  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:30.508810  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:30.546082  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:34:30.546123  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:34:30.579092  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:30.579118  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:30.604859  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:30.604882  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:30.657742  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:30.657769  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:30.671365  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:30.671388  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:30.705424  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:30.705450  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:31.484807  292217 addons.go:530] duration metric: took 2.350700971s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 00:34:31.969645  292217 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:34:31.975159  292217 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:31.975202  292217 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:28.982220  287750 pod_ready.go:104] pod "etcd-old-k8s-version-743506" is not "Ready", error: <nil>
	W1212 00:34:31.481681  287750 pod_ready.go:104] pod "etcd-old-k8s-version-743506" is not "Ready", error: <nil>
	I1212 00:34:31.981277  287750 pod_ready.go:94] pod "etcd-old-k8s-version-743506" is "Ready"
	I1212 00:34:31.981308  287750 pod_ready.go:86] duration metric: took 5.007040467s for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:31.985958  287750 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:31.993506  287750 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-743506" is "Ready"
	I1212 00:34:31.993527  287750 pod_ready.go:86] duration metric: took 7.548054ms for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:31.998043  287750 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.003355  287750 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-743506" is "Ready"
	I1212 00:34:32.003413  287750 pod_ready.go:86] duration metric: took 5.344333ms for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.006576  287750 pod_ready.go:83] waiting for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.178403  287750 pod_ready.go:94] pod "kube-proxy-pz8kt" is "Ready"
	I1212 00:34:32.178429  287750 pod_ready.go:86] duration metric: took 171.831568ms for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.379015  287750 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.778296  287750 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-743506" is "Ready"
	I1212 00:34:32.778333  287750 pod_ready.go:86] duration metric: took 399.28376ms for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.778358  287750 pod_ready.go:40] duration metric: took 14.8158908s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:32.833106  287750 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1212 00:34:32.835985  287750 out.go:203] 
	W1212 00:34:32.837089  287750 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1212 00:34:32.838320  287750 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1212 00:34:32.839516  287750 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-743506" cluster and "default" namespace by default
	W1212 00:34:31.007103  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: node "no-preload-675290" hosting pod "coredns-7d764666f9-44t4m" is not "Ready" (will retry)
	W1212 00:34:33.503221  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: <nil>
	I1212 00:34:33.249325  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:34.659371  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:57038->192.168.85.2:8443: read: connection reset by peer
	I1212 00:34:34.659452  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:34.659598  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:34.720459  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:34.720562  263844 cri.go:89] found id: "e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:34:34.720572  263844 cri.go:89] found id: ""
	I1212 00:34:34.720583  263844 logs.go:282] 2 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106]
	I1212 00:34:34.720649  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.727624  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.732978  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:34.733038  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:34.771884  263844 cri.go:89] found id: ""
	I1212 00:34:34.771911  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.771923  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:34.771930  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:34.771985  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:34.813253  263844 cri.go:89] found id: ""
	I1212 00:34:34.813292  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.813304  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:34.813313  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:34.813375  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:34.854049  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:34.854075  263844 cri.go:89] found id: ""
	I1212 00:34:34.854084  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:34.854152  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.860190  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:34.860258  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:34.898840  263844 cri.go:89] found id: ""
	I1212 00:34:34.898872  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.898883  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:34.898891  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:34.898952  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:34.937834  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:34.937859  263844 cri.go:89] found id: ""
	I1212 00:34:34.937869  263844 logs.go:282] 1 containers: [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:34.937925  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.944202  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:34.944414  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:34.982162  263844 cri.go:89] found id: ""
	I1212 00:34:34.982222  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.982233  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:34.982250  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:34.982348  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:35.028882  263844 cri.go:89] found id: ""
	I1212 00:34:35.028907  263844 logs.go:282] 0 containers: []
	W1212 00:34:35.028919  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:35.028935  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:35.028955  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:35.123296  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:35.123511  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:34:35.123684  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:34:35.174086  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:35.174189  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:35.261332  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:35.261372  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:35.311106  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:35.311138  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:35.426171  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:35.426206  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:35.471138  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:35.471171  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:35.510352  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:35.510384  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:35.548527  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:35.548558  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:32.469729  292217 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:34:32.474936  292217 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1212 00:34:32.475992  292217 api_server.go:141] control plane version: v1.34.2
	I1212 00:34:32.476016  292217 api_server.go:131] duration metric: took 1.006503678s to wait for apiserver health ...
	I1212 00:34:32.476025  292217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:34:32.479628  292217 system_pods.go:59] 8 kube-system pods found
	I1212 00:34:32.479664  292217 system_pods.go:61] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:32.479681  292217 system_pods.go:61] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:32.479695  292217 system_pods.go:61] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 00:34:32.479711  292217 system_pods.go:61] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:32.479726  292217 system_pods.go:61] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:32.479738  292217 system_pods.go:61] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 00:34:32.479757  292217 system_pods.go:61] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:32.479769  292217 system_pods.go:61] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:34:32.479777  292217 system_pods.go:74] duration metric: took 3.744489ms to wait for pod list to return data ...
	I1212 00:34:32.479789  292217 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:34:32.481998  292217 default_sa.go:45] found service account: "default"
	I1212 00:34:32.482021  292217 default_sa.go:55] duration metric: took 2.221892ms for default service account to be created ...
	I1212 00:34:32.482031  292217 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:34:32.484891  292217 system_pods.go:86] 8 kube-system pods found
	I1212 00:34:32.484922  292217 system_pods.go:89] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:32.484941  292217 system_pods.go:89] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:32.484954  292217 system_pods.go:89] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 00:34:32.484963  292217 system_pods.go:89] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:32.484982  292217 system_pods.go:89] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:32.484994  292217 system_pods.go:89] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 00:34:32.485002  292217 system_pods.go:89] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:32.485010  292217 system_pods.go:89] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:34:32.485019  292217 system_pods.go:126] duration metric: took 2.981143ms to wait for k8s-apps to be running ...
	I1212 00:34:32.485026  292217 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:34:32.485080  292217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:32.502896  292217 system_svc.go:56] duration metric: took 17.862237ms WaitForService to wait for kubelet
	I1212 00:34:32.502923  292217 kubeadm.go:587] duration metric: took 3.368842736s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:32.502943  292217 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:34:32.506066  292217 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:34:32.506091  292217 node_conditions.go:123] node cpu capacity is 8
	I1212 00:34:32.506107  292217 node_conditions.go:105] duration metric: took 3.15793ms to run NodePressure ...
	I1212 00:34:32.506121  292217 start.go:242] waiting for startup goroutines ...
	I1212 00:34:32.506137  292217 start.go:247] waiting for cluster config update ...
	I1212 00:34:32.506152  292217 start.go:256] writing updated cluster config ...
	I1212 00:34:32.506449  292217 ssh_runner.go:195] Run: rm -f paused
	I1212 00:34:32.511014  292217 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:32.516674  292217 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8x66p" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 00:34:34.527697  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:36.531266  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:35.509694  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: <nil>
	W1212 00:34:38.002902  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: <nil>
	I1212 00:34:39.003919  290093 pod_ready.go:94] pod "coredns-7d764666f9-44t4m" is "Ready"
	I1212 00:34:39.003948  290093 pod_ready.go:86] duration metric: took 14.506978089s for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.007164  290093 pod_ready.go:83] waiting for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.012443  290093 pod_ready.go:94] pod "etcd-no-preload-675290" is "Ready"
	I1212 00:34:39.012467  290093 pod_ready.go:86] duration metric: took 5.280222ms for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.015470  290093 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.019749  290093 pod_ready.go:94] pod "kube-apiserver-no-preload-675290" is "Ready"
	I1212 00:34:39.019769  290093 pod_ready.go:86] duration metric: took 4.25314ms for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.022432  290093 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.603008  290093 pod_ready.go:94] pod "kube-controller-manager-no-preload-675290" is "Ready"
	I1212 00:34:39.603053  290093 pod_ready.go:86] duration metric: took 580.604646ms for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.801951  290093 pod_ready.go:83] waiting for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.202574  290093 pod_ready.go:94] pod "kube-proxy-7pxpp" is "Ready"
	I1212 00:34:40.202612  290093 pod_ready.go:86] duration metric: took 400.60658ms for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.402173  290093 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.801438  290093 pod_ready.go:94] pod "kube-scheduler-no-preload-675290" is "Ready"
	I1212 00:34:40.801466  290093 pod_ready.go:86] duration metric: took 399.266926ms for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.801497  290093 pod_ready.go:40] duration metric: took 16.307864565s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:40.857643  290093 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 00:34:40.899899  290093 out.go:179] * Done! kubectl is now configured to use "no-preload-675290" cluster and "default" namespace by default
	I1212 00:34:38.070909  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:38.071317  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:38.071368  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:38.071437  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:38.100977  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:38.101000  263844 cri.go:89] found id: ""
	I1212 00:34:38.101008  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:38.101055  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.105578  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:38.105642  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:38.137933  263844 cri.go:89] found id: ""
	I1212 00:34:38.137961  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.137977  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:38.137986  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:38.138051  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:38.172506  263844 cri.go:89] found id: ""
	I1212 00:34:38.172711  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.172783  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:38.172849  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:38.172980  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:38.209393  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:38.209411  263844 cri.go:89] found id: ""
	I1212 00:34:38.209418  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:38.209463  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.213539  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:38.213610  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:38.255964  263844 cri.go:89] found id: ""
	I1212 00:34:38.255987  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.255997  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:38.256005  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:38.256070  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:38.294229  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:38.294319  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:38.294326  263844 cri.go:89] found id: ""
	I1212 00:34:38.294333  263844 logs.go:282] 2 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:38.294395  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.299827  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.304884  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:38.304948  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:38.345668  263844 cri.go:89] found id: ""
	I1212 00:34:38.345711  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.345724  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:38.345733  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:38.345800  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:38.383644  263844 cri.go:89] found id: ""
	I1212 00:34:38.383671  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.383683  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:38.383703  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:38.383716  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:38.511578  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:38.511613  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:38.593958  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:38.593982  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:38.593999  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:38.630769  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:38.630799  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:38.711173  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:38.711213  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:38.758424  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:38.758457  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:38.779867  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:38.779897  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:38.821126  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:38.821166  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:38.859790  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:38.859830  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:41.397624  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:41.398042  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:41.398100  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:41.398171  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:41.433192  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:41.433213  263844 cri.go:89] found id: ""
	I1212 00:34:41.433223  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:41.433281  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.437728  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:41.437792  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:41.463618  263844 cri.go:89] found id: ""
	I1212 00:34:41.463643  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.463653  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:41.463660  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:41.463731  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:41.490995  263844 cri.go:89] found id: ""
	I1212 00:34:41.491018  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.491026  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:41.491035  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:41.491093  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:41.518246  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:41.518267  263844 cri.go:89] found id: ""
	I1212 00:34:41.518276  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:41.518332  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.522787  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:41.522849  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:41.549671  263844 cri.go:89] found id: ""
	I1212 00:34:41.549706  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.549716  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:41.549723  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:41.549783  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:41.577845  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:41.577868  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:41.577874  263844 cri.go:89] found id: ""
	I1212 00:34:41.577882  263844 logs.go:282] 2 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:41.577929  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.581784  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.585354  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:41.585419  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:41.615221  263844 cri.go:89] found id: ""
	I1212 00:34:41.615254  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.615265  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:41.615274  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:41.615336  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:41.641215  263844 cri.go:89] found id: ""
	I1212 00:34:41.641238  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.641248  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:41.641266  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:41.641280  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:41.699142  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:41.699160  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:41.699176  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:41.728077  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:41.728106  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:41.753905  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:41.753927  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:41.778651  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:41.778679  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:41.807391  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:41.807414  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:41.831717  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:41.831741  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:41.882004  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:41.882028  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:41.958828  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:41.958859  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 00:34:39.022400  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:41.025312  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	I1212 00:34:44.472376  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:44.472832  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:44.472887  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:44.472950  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:44.499664  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:44.499683  263844 cri.go:89] found id: ""
	I1212 00:34:44.499690  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:44.499740  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.503544  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:44.503613  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:44.530338  263844 cri.go:89] found id: ""
	I1212 00:34:44.530363  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.530373  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:44.530380  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:44.530421  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:44.556031  263844 cri.go:89] found id: ""
	I1212 00:34:44.556054  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.556064  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:44.556071  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:44.556130  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:44.581377  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:44.581397  263844 cri.go:89] found id: ""
	I1212 00:34:44.581406  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:44.581504  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.585206  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:44.585254  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:44.609906  263844 cri.go:89] found id: ""
	I1212 00:34:44.609929  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.609937  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:44.609942  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:44.609995  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:44.635568  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:44.635590  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:44.635594  263844 cri.go:89] found id: ""
	I1212 00:34:44.635601  263844 logs.go:282] 2 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:44.635645  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.639406  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.642913  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:44.642978  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:44.667080  263844 cri.go:89] found id: ""
	I1212 00:34:44.667105  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.667114  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:44.667120  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:44.667166  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:44.690884  263844 cri.go:89] found id: ""
	I1212 00:34:44.690908  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.690917  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:44.690929  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:44.690940  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:44.741690  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:44.741717  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:44.769952  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:44.769978  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:44.845857  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:44.845885  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:44.898939  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:44.898959  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:44.898973  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:44.929908  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:44.929935  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:44.955084  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:44.955105  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:44.968097  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:44.968119  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:44.992542  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:44.992564  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	W1212 00:34:43.533865  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:46.022030  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 12 00:34:33 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:33.694598569Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5b48c3d11caad3c9bd7c8e78c456b1df729430761f10acd0694f36df144daba3/merged/etc/group: no such file or directory"
	Dec 12 00:34:33 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:33.69502243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:33 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:33.721378208Z" level=info msg="Created container c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jhg57/kubernetes-dashboard" id=4119b681-6d2e-49cb-8c26-0ca6373c60e5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:33 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:33.721971193Z" level=info msg="Starting container: c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420" id=91661fc9-6ad0-4293-b3c4-403436cd0d4e name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:33 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:33.723824693Z" level=info msg="Started container" PID=1540 containerID=c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jhg57/kubernetes-dashboard id=91661fc9-6ad0-4293-b3c4-403436cd0d4e name=/runtime.v1.RuntimeService/StartContainer sandboxID=9993ac356f33f26cd31bd36e7037a905af90c1b2ace3acae58222c407baea99b
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.614669141Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=85473009-3132-4aec-84f6-491cba3dfa68 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.615993928Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c78746d3-710b-464c-a3fd-07b522c19fb2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.619234444Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper" id=397fcc47-aa50-49d7-95a8-d791be24f0fd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.619364372Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.628977387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.629674244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.657815875Z" level=info msg="Created container 23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper" id=397fcc47-aa50-49d7-95a8-d791be24f0fd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.659002058Z" level=info msg="Starting container: 23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036" id=a91dc954-e9a9-435e-bd30-00f7bf1ffb38 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:36 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:36.661879543Z" level=info msg="Started container" PID=1749 containerID=23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper id=a91dc954-e9a9-435e-bd30-00f7bf1ffb38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=54ea13de95b23d7a5e580eab3ce50dc74364353eaf06d031d582dee24a97487d
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.67436137Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4645e31d-4290-46fb-a4c1-d916a3b48fc0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.677452286Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0a625174-928f-4278-8615-fb38448b0680 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.680714091Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper" id=ce66a5fe-95f7-4608-9ac0-b5d981aa7823 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.680850864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.689626852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.690352679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.723033892Z" level=info msg="Created container e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper" id=ce66a5fe-95f7-4608-9ac0-b5d981aa7823 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.723640377Z" level=info msg="Starting container: e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af" id=3b2aa1cb-01cc-4506-90e0-ea1e44897ad8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:37 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:37.726280985Z" level=info msg="Started container" PID=1763 containerID=e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper id=3b2aa1cb-01cc-4506-90e0-ea1e44897ad8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=54ea13de95b23d7a5e580eab3ce50dc74364353eaf06d031d582dee24a97487d
	Dec 12 00:34:38 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:38.68315069Z" level=info msg="Removing container: 23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036" id=819c9eb2-5109-48ec-bb70-ad122656a7a1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:34:38 old-k8s-version-743506 crio[570]: time="2025-12-12T00:34:38.697426134Z" level=info msg="Removed container 23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn/dashboard-metrics-scraper" id=819c9eb2-5109-48ec-bb70-ad122656a7a1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e903d75d8008f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   1                   54ea13de95b23       dashboard-metrics-scraper-5f989dc9cf-r64gn       kubernetes-dashboard
	c00ac9a4c0b0a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   17 seconds ago      Running             kubernetes-dashboard        0                   9993ac356f33f       kubernetes-dashboard-8694d4445c-jhg57            kubernetes-dashboard
	590f315a0e414       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           26 seconds ago      Running             coredns                     0                   3bab8358c495f       coredns-5dd5756b68-nxwdc                         kube-system
	f1a9f94ef8f89       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           26 seconds ago      Running             busybox                     1                   526dd63209738       busybox                                          default
	d5c10c053a230       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           34 seconds ago      Running             kindnet-cni                 0                   767adbec4f854       kindnet-s2gvw                                    kube-system
	40f0aa5d7111a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           34 seconds ago      Exited              storage-provisioner         0                   e00e88e41f26b       storage-provisioner                              kube-system
	46ced247bb6cb       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           34 seconds ago      Running             kube-proxy                  0                   0f16dc7b7fe33       kube-proxy-pz8kt                                 kube-system
	17479f6c2196c       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           37 seconds ago      Running             kube-controller-manager     0                   493af1802641d       kube-controller-manager-old-k8s-version-743506   kube-system
	a0ad080a093dd       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           37 seconds ago      Running             kube-apiserver              0                   c52b2141f4cc4       kube-apiserver-old-k8s-version-743506            kube-system
	d463fe18198b2       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           37 seconds ago      Running             kube-scheduler              0                   ec21348ab30a5       kube-scheduler-old-k8s-version-743506            kube-system
	be598b8792667       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           37 seconds ago      Running             etcd                        0                   7eadf74e9de8c       etcd-old-k8s-version-743506                      kube-system
	
	
	==> coredns [590f315a0e41429b9947025cf60b15230faac5f9cb474a9172b52736e8344a73] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59572 - 64129 "HINFO IN 2520603107590759331.6970155470677605587. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.488605797s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-743506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-743506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=old-k8s-version-743506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_33_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:33:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-743506
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:34:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:34:26 +0000   Fri, 12 Dec 2025 00:33:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:34:26 +0000   Fri, 12 Dec 2025 00:33:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:34:26 +0000   Fri, 12 Dec 2025 00:33:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:34:26 +0000   Fri, 12 Dec 2025 00:34:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-743506
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                6e4a36d1-9d16-43c1-a591-2e531ad940c7
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 coredns-5dd5756b68-nxwdc                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     88s
	  kube-system                 etcd-old-k8s-version-743506                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         101s
	  kube-system                 kindnet-s2gvw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-old-k8s-version-743506             250m (3%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-old-k8s-version-743506    200m (2%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-pz8kt                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-old-k8s-version-743506             100m (1%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-r64gn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-jhg57             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 86s                kube-proxy       
	  Normal  Starting                 34s                kube-proxy       
	  Normal  NodeHasSufficientMemory  101s               kubelet          Node old-k8s-version-743506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s               kubelet          Node old-k8s-version-743506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s               kubelet          Node old-k8s-version-743506 status is now: NodeHasSufficientPID
	  Normal  Starting                 101s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           89s                node-controller  Node old-k8s-version-743506 event: Registered Node old-k8s-version-743506 in Controller
	  Normal  NodeReady                74s                kubelet          Node old-k8s-version-743506 status is now: NodeReady
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node old-k8s-version-743506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node old-k8s-version-743506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node old-k8s-version-743506 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s                node-controller  Node old-k8s-version-743506 event: Registered Node old-k8s-version-743506 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [be598b879266780ddd79387bcfed0dfa8ab737f26c38131f8d1479fbb3247bab] <==
	{"level":"info","ts":"2025-12-12T00:34:14.137189Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-12T00:34:14.137213Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-12T00:34:14.137119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-12T00:34:14.137346Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-12T00:34:14.13751Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T00:34:14.13756Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T00:34:14.139437Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-12T00:34:14.139622Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-12T00:34:14.139654Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-12T00:34:14.139749Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-12T00:34:14.139783Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-12T00:34:15.529615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-12T00:34:15.529657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-12T00:34:15.529693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-12T00:34:15.529707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-12T00:34:15.529712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-12T00:34:15.529721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-12T00:34:15.529727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-12T00:34:15.530855Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T00:34:15.530872Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T00:34:15.530861Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-743506 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-12T00:34:15.531112Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-12T00:34:15.531138Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-12T00:34:15.532158Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-12T00:34:15.532157Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 00:34:51 up  1:17,  0 user,  load average: 5.47, 3.20, 1.96
	Linux old-k8s-version-743506 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5c10c053a2306b36bed7e95ea22c18e8c71e4916805105400cca4e715b39675] <==
	I1212 00:34:17.167090       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:34:17.167332       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1212 00:34:17.167449       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:34:17.167469       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:34:17.167502       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:34:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:34:17.364837       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:34:17.364866       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:34:17.364878       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:34:17.365029       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:34:17.665061       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:34:17.665108       1 metrics.go:72] Registering metrics
	I1212 00:34:17.665181       1 controller.go:711] "Syncing nftables rules"
	I1212 00:34:27.365159       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:34:27.365205       1 main.go:301] handling current node
	I1212 00:34:37.365383       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:34:37.365422       1 main.go:301] handling current node
	I1212 00:34:47.364972       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:34:47.364999       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a0ad080a093ddeaa725b3aa7bd29e92715f2fa158214966408422908cd7efbce] <==
	I1212 00:34:16.440290       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1212 00:34:16.538966       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 00:34:16.538998       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:34:16.539132       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 00:34:16.539151       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 00:34:16.539235       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 00:34:16.539286       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 00:34:16.539298       1 aggregator.go:166] initial CRD sync complete...
	I1212 00:34:16.539308       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 00:34:16.539314       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:34:16.539322       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:34:16.540162       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 00:34:16.576233       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:34:16.582964       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 00:34:17.324593       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 00:34:17.351092       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 00:34:17.370151       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:34:17.376143       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:34:17.381629       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 00:34:17.412112       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.70.116"}
	I1212 00:34:17.424557       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.171.209"}
	I1212 00:34:17.442604       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:34:29.556336       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 00:34:29.703813       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 00:34:29.804490       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [17479f6c2196c0e03f5729c15203a934ede701786b535f944214967080179be1] <==
	I1212 00:34:29.508642       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:34:29.573290       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:34:29.706416       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1212 00:34:29.707766       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1212 00:34:29.911201       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-jhg57"
	I1212 00:34:29.911726       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-r64gn"
	I1212 00:34:29.920281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="214.079708ms"
	I1212 00:34:29.920652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="213.58192ms"
	I1212 00:34:29.930797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.470273ms"
	I1212 00:34:29.931416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.716135ms"
	I1212 00:34:29.931571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.821µs"
	I1212 00:34:29.932394       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:34:29.942864       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.384µs"
	I1212 00:34:29.943629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.788692ms"
	I1212 00:34:29.943752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.812µs"
	I1212 00:34:29.955853       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:34:29.955984       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 00:34:34.735771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.54929ms"
	I1212 00:34:34.737087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="229.703µs"
	I1212 00:34:36.692277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.060824ms"
	I1212 00:34:36.692512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.565µs"
	I1212 00:34:37.696127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.826966ms"
	I1212 00:34:37.696249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.679µs"
	I1212 00:34:38.702078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.686µs"
	I1212 00:34:39.699100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.854µs"
	
	
	==> kube-proxy [46ced247bb6cb9df1846e17c8e1a5267a02c76daf3d5721b9ab21a684f9f59d7] <==
	I1212 00:34:16.985664       1 server_others.go:69] "Using iptables proxy"
	I1212 00:34:16.997866       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1212 00:34:17.017237       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:34:17.019505       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:34:17.019534       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:34:17.019545       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:34:17.019569       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:34:17.019808       1 server.go:846] "Version info" version="v1.28.0"
	I1212 00:34:17.019823       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:17.020393       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:34:17.020431       1 config.go:315] "Starting node config controller"
	I1212 00:34:17.020435       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:34:17.020450       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:34:17.020503       1 config.go:188] "Starting service config controller"
	I1212 00:34:17.020512       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:34:17.121021       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:34:17.121047       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 00:34:17.121145       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d463fe18198b237fef4bf76765fe49362a2634f1272c155ffbf6c2967f301bf9] <==
	I1212 00:34:14.631509       1 serving.go:348] Generated self-signed cert in-memory
	W1212 00:34:16.474280       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:34:16.474328       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:34:16.474348       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:34:16.474360       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:34:16.492463       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1212 00:34:16.492512       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:16.494159       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:34:16.494208       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:34:16.495263       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1212 00:34:16.496352       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 00:34:16.594980       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 00:34:20 old-k8s-version-743506 kubelet[734]: E1212 00:34:20.187275     734 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 00:34:20 old-k8s-version-743506 kubelet[734]: E1212 00:34:20.187374     734 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e73711a2-208b-41a6-a47f-6253638cfdf2-config-volume podName:e73711a2-208b-41a6-a47f-6253638cfdf2 nodeName:}" failed. No retries permitted until 2025-12-12 00:34:24.187352984 +0000 UTC m=+10.684196781 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e73711a2-208b-41a6-a47f-6253638cfdf2-config-volume") pod "coredns-5dd5756b68-nxwdc" (UID: "e73711a2-208b-41a6-a47f-6253638cfdf2") : object "kube-system"/"coredns" not registered
	Dec 12 00:34:20 old-k8s-version-743506 kubelet[734]: E1212 00:34:20.288076     734 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:34:20 old-k8s-version-743506 kubelet[734]: E1212 00:34:20.288110     734 projected.go:198] Error preparing data for projected volume kube-api-access-72247 for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:34:20 old-k8s-version-743506 kubelet[734]: E1212 00:34:20.288182     734 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a0e8330-9dea-4063-9369-234ee8e6ef43-kube-api-access-72247 podName:1a0e8330-9dea-4063-9369-234ee8e6ef43 nodeName:}" failed. No retries permitted until 2025-12-12 00:34:24.288164931 +0000 UTC m=+10.785008723 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-72247" (UniqueName: "kubernetes.io/projected/1a0e8330-9dea-4063-9369-234ee8e6ef43-kube-api-access-72247") pod "busybox" (UID: "1a0e8330-9dea-4063-9369-234ee8e6ef43") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.919688     734 topology_manager.go:215] "Topology Admit Handler" podUID="d0734f6b-f43b-4c8f-a510-cb132816b525" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-jhg57"
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.922204     734 topology_manager.go:215] "Topology Admit Handler" podUID="f6af7f86-6269-45c7-9d04-9157687f0860" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-r64gn"
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.947235     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6af7f86-6269-45c7-9d04-9157687f0860-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-r64gn\" (UID: \"f6af7f86-6269-45c7-9d04-9157687f0860\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn"
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.947296     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwgmj\" (UniqueName: \"kubernetes.io/projected/f6af7f86-6269-45c7-9d04-9157687f0860-kube-api-access-cwgmj\") pod \"dashboard-metrics-scraper-5f989dc9cf-r64gn\" (UID: \"f6af7f86-6269-45c7-9d04-9157687f0860\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn"
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.947556     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d0734f6b-f43b-4c8f-a510-cb132816b525-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-jhg57\" (UID: \"d0734f6b-f43b-4c8f-a510-cb132816b525\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jhg57"
	Dec 12 00:34:29 old-k8s-version-743506 kubelet[734]: I1212 00:34:29.947652     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7cg7\" (UniqueName: \"kubernetes.io/projected/d0734f6b-f43b-4c8f-a510-cb132816b525-kube-api-access-p7cg7\") pod \"kubernetes-dashboard-8694d4445c-jhg57\" (UID: \"d0734f6b-f43b-4c8f-a510-cb132816b525\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jhg57"
	Dec 12 00:34:36 old-k8s-version-743506 kubelet[734]: I1212 00:34:36.682525     734 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn" podStartSLOduration=1.315146408 podCreationTimestamp="2025-12-12 00:34:29 +0000 UTC" firstStartedPulling="2025-12-12 00:34:30.247716769 +0000 UTC m=+16.744560557" lastFinishedPulling="2025-12-12 00:34:36.615008153 +0000 UTC m=+23.111851935" observedRunningTime="2025-12-12 00:34:36.680521682 +0000 UTC m=+23.177365483" watchObservedRunningTime="2025-12-12 00:34:36.682437786 +0000 UTC m=+23.179281620"
	Dec 12 00:34:36 old-k8s-version-743506 kubelet[734]: I1212 00:34:36.682866     734 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jhg57" podStartSLOduration=4.241625533 podCreationTimestamp="2025-12-12 00:34:29 +0000 UTC" firstStartedPulling="2025-12-12 00:34:30.244825772 +0000 UTC m=+16.741669560" lastFinishedPulling="2025-12-12 00:34:33.686028948 +0000 UTC m=+20.182872732" observedRunningTime="2025-12-12 00:34:34.706537696 +0000 UTC m=+21.203381490" watchObservedRunningTime="2025-12-12 00:34:36.682828705 +0000 UTC m=+23.179672504"
	Dec 12 00:34:37 old-k8s-version-743506 kubelet[734]: I1212 00:34:37.673779     734 scope.go:117] "RemoveContainer" containerID="23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036"
	Dec 12 00:34:38 old-k8s-version-743506 kubelet[734]: I1212 00:34:38.681378     734 scope.go:117] "RemoveContainer" containerID="23563703b720516cdc853ac623892f0e3d24ba7876e0a91d65dc37db5bf74036"
	Dec 12 00:34:38 old-k8s-version-743506 kubelet[734]: I1212 00:34:38.681738     734 scope.go:117] "RemoveContainer" containerID="e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af"
	Dec 12 00:34:38 old-k8s-version-743506 kubelet[734]: E1212 00:34:38.682114     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r64gn_kubernetes-dashboard(f6af7f86-6269-45c7-9d04-9157687f0860)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn" podUID="f6af7f86-6269-45c7-9d04-9157687f0860"
	Dec 12 00:34:39 old-k8s-version-743506 kubelet[734]: I1212 00:34:39.686109     734 scope.go:117] "RemoveContainer" containerID="e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af"
	Dec 12 00:34:39 old-k8s-version-743506 kubelet[734]: E1212 00:34:39.686510     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r64gn_kubernetes-dashboard(f6af7f86-6269-45c7-9d04-9157687f0860)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn" podUID="f6af7f86-6269-45c7-9d04-9157687f0860"
	Dec 12 00:34:40 old-k8s-version-743506 kubelet[734]: I1212 00:34:40.688457     734 scope.go:117] "RemoveContainer" containerID="e903d75d8008fddf71158194c0a15dd7caaba9ceb3c693696d81f596d545c2af"
	Dec 12 00:34:40 old-k8s-version-743506 kubelet[734]: E1212 00:34:40.688959     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r64gn_kubernetes-dashboard(f6af7f86-6269-45c7-9d04-9157687f0860)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r64gn" podUID="f6af7f86-6269-45c7-9d04-9157687f0860"
	Dec 12 00:34:46 old-k8s-version-743506 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:34:46 old-k8s-version-743506 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:34:46 old-k8s-version-743506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:34:46 old-k8s-version-743506 systemd[1]: kubelet.service: Consumed 1.106s CPU time.
	
	
	==> kubernetes-dashboard [c00ac9a4c0b0a4a61b6f5dcab16513d50e33b4ba5166606551722a4571ca2420] <==
	2025/12/12 00:34:33 Using namespace: kubernetes-dashboard
	2025/12/12 00:34:33 Using in-cluster config to connect to apiserver
	2025/12/12 00:34:33 Using secret token for csrf signing
	2025/12/12 00:34:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 00:34:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 00:34:33 Successful initial request to the apiserver, version: v1.28.0
	2025/12/12 00:34:33 Generating JWE encryption key
	2025/12/12 00:34:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 00:34:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 00:34:34 Initializing JWE encryption key from synchronized object
	2025/12/12 00:34:34 Creating in-cluster Sidecar client
	2025/12/12 00:34:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 00:34:34 Serving insecurely on HTTP port: 9090
	2025/12/12 00:34:33 Starting overwatch
	
	
	==> storage-provisioner [40f0aa5d7111ad953f0cf93f67a62cb204d4fc074605fd5ab577a94dc6d2d0a2] <==
	I1212 00:34:16.947897       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 00:34:46.950297       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
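Editor's note: the storage-provisioner section at the end of the logs above fails on a version probe against the in-cluster apiserver address, timing out after 32s. Below is a minimal, hypothetical sketch of that kind of probe, not the provisioner's actual code; the service IP and timeout are copied from the log line, everything else is illustrative.

// probe_apiserver.go - hypothetical sketch of the version probe the
// storage-provisioner log above reports timing out on.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // same timeout as in the log line
		Transport: &http.Transport{
			// Probing the bare ClusterIP, so certificate verification is skipped here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
	if err != nil {
		// e.g. "dial tcp 10.96.0.1:443: i/o timeout", as seen above
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}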
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-743506 -n old-k8s-version-743506
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-743506 -n old-k8s-version-743506: exit status 2 (317.971659ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-743506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.73s)
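Editor's note: the post-mortem above finishes by listing every pod whose phase is not Running (the kubectl call at helpers_test.go:270). A minimal client-go sketch of the same check follows, assuming the profile's context exists in the local kubeconfig; the context name is taken from the log, the rest is illustrative and not the harness's own code (which shells out to kubectl).

// list_not_running.go - client-go sketch of the post-mortem pod check above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the same way "kubectl --context <name>" would.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-743506"}
	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same field selector as the post-mortem: all namespaces, phase != Running.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\n", p.Namespace, p.Name)
	}
}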

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (5.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-675290 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-675290 --alsologtostderr -v=1: exit status 80 (1.57165483s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-675290 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:34:52.693937  298931 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:34:52.694206  298931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:52.694217  298931 out.go:374] Setting ErrFile to fd 2...
	I1212 00:34:52.694221  298931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:52.694562  298931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:34:52.694853  298931 out.go:368] Setting JSON to false
	I1212 00:34:52.694871  298931 mustload.go:66] Loading cluster: no-preload-675290
	I1212 00:34:52.695283  298931 config.go:182] Loaded profile config "no-preload-675290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:34:52.695727  298931 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:34:52.714121  298931 host.go:66] Checking if "no-preload-675290" exists ...
	I1212 00:34:52.714420  298931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:52.768237  298931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-12 00:34:52.758899797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:34:52.768851  298931 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-675290 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 00:34:52.771138  298931 out.go:179] * Pausing node no-preload-675290 ... 
	I1212 00:34:52.772297  298931 host.go:66] Checking if "no-preload-675290" exists ...
	I1212 00:34:52.772581  298931 ssh_runner.go:195] Run: systemctl --version
	I1212 00:34:52.772618  298931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:34:52.789698  298931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:34:52.892179  298931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:52.904184  298931 pause.go:52] kubelet running: true
	I1212 00:34:52.904245  298931 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:34:53.060121  298931 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:34:53.060234  298931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:34:53.124742  298931 cri.go:89] found id: "8756cb93787e3b94062fff7a5ad449f413a0953cb033edf4e89087b4b35d0ecb"
	I1212 00:34:53.124766  298931 cri.go:89] found id: "0b0225504b360347799aa89a639fd044dab1a6bb89d5ff5364dfdd123cb5e696"
	I1212 00:34:53.124773  298931 cri.go:89] found id: "cc18f21bf8c65e549247cc01742151c3b3c2a3b4c6e9af0b6f20a581191484c1"
	I1212 00:34:53.124778  298931 cri.go:89] found id: "c773979867a3857d504f5fe3ee07988f68ca6fef4d7f84d7fafacdb70902f2f8"
	I1212 00:34:53.124783  298931 cri.go:89] found id: "ac6219e970ded4f9c3fd189c0a0034da3c97b482398366b730cf9355f56749e4"
	I1212 00:34:53.124788  298931 cri.go:89] found id: "09dd91a035f4ed00c98271aa71b03f50c8302250a3bad1e75601f3f063c96c11"
	I1212 00:34:53.124793  298931 cri.go:89] found id: "b88c2733616198eb832ac5f6598b52771ff77dc17d732bbee216a8ceba1085c9"
	I1212 00:34:53.124797  298931 cri.go:89] found id: "3787e40b4cb5a662778290c05ca00d3817fdfbe9b3c1be44d55d057774bc5b3f"
	I1212 00:34:53.124802  298931 cri.go:89] found id: "ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f"
	I1212 00:34:53.124814  298931 cri.go:89] found id: "fc635a362d3ce8418e951572f54c063d0f426b6b9593da951cb6600cd1fcfc3b"
	I1212 00:34:53.124835  298931 cri.go:89] found id: ""
	I1212 00:34:53.124880  298931 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:34:53.135852  298931 retry.go:31] will retry after 298.648315ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:53Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:34:53.435352  298931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:53.447659  298931 pause.go:52] kubelet running: false
	I1212 00:34:53.447730  298931 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:34:53.586083  298931 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:34:53.586174  298931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:34:53.650267  298931 cri.go:89] found id: "8756cb93787e3b94062fff7a5ad449f413a0953cb033edf4e89087b4b35d0ecb"
	I1212 00:34:53.650293  298931 cri.go:89] found id: "0b0225504b360347799aa89a639fd044dab1a6bb89d5ff5364dfdd123cb5e696"
	I1212 00:34:53.650298  298931 cri.go:89] found id: "cc18f21bf8c65e549247cc01742151c3b3c2a3b4c6e9af0b6f20a581191484c1"
	I1212 00:34:53.650304  298931 cri.go:89] found id: "c773979867a3857d504f5fe3ee07988f68ca6fef4d7f84d7fafacdb70902f2f8"
	I1212 00:34:53.650308  298931 cri.go:89] found id: "ac6219e970ded4f9c3fd189c0a0034da3c97b482398366b730cf9355f56749e4"
	I1212 00:34:53.650321  298931 cri.go:89] found id: "09dd91a035f4ed00c98271aa71b03f50c8302250a3bad1e75601f3f063c96c11"
	I1212 00:34:53.650329  298931 cri.go:89] found id: "b88c2733616198eb832ac5f6598b52771ff77dc17d732bbee216a8ceba1085c9"
	I1212 00:34:53.650334  298931 cri.go:89] found id: "3787e40b4cb5a662778290c05ca00d3817fdfbe9b3c1be44d55d057774bc5b3f"
	I1212 00:34:53.650339  298931 cri.go:89] found id: "ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f"
	I1212 00:34:53.650363  298931 cri.go:89] found id: "fc635a362d3ce8418e951572f54c063d0f426b6b9593da951cb6600cd1fcfc3b"
	I1212 00:34:53.650371  298931 cri.go:89] found id: ""
	I1212 00:34:53.650443  298931 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:34:53.662722  298931 retry.go:31] will retry after 285.460754ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:53Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:34:53.949230  298931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:53.963745  298931 pause.go:52] kubelet running: false
	I1212 00:34:53.963801  298931 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:34:54.112873  298931 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:34:54.112950  298931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:34:54.179420  298931 cri.go:89] found id: "8756cb93787e3b94062fff7a5ad449f413a0953cb033edf4e89087b4b35d0ecb"
	I1212 00:34:54.179452  298931 cri.go:89] found id: "0b0225504b360347799aa89a639fd044dab1a6bb89d5ff5364dfdd123cb5e696"
	I1212 00:34:54.179467  298931 cri.go:89] found id: "cc18f21bf8c65e549247cc01742151c3b3c2a3b4c6e9af0b6f20a581191484c1"
	I1212 00:34:54.179497  298931 cri.go:89] found id: "c773979867a3857d504f5fe3ee07988f68ca6fef4d7f84d7fafacdb70902f2f8"
	I1212 00:34:54.179502  298931 cri.go:89] found id: "ac6219e970ded4f9c3fd189c0a0034da3c97b482398366b730cf9355f56749e4"
	I1212 00:34:54.179507  298931 cri.go:89] found id: "09dd91a035f4ed00c98271aa71b03f50c8302250a3bad1e75601f3f063c96c11"
	I1212 00:34:54.179511  298931 cri.go:89] found id: "b88c2733616198eb832ac5f6598b52771ff77dc17d732bbee216a8ceba1085c9"
	I1212 00:34:54.179520  298931 cri.go:89] found id: "3787e40b4cb5a662778290c05ca00d3817fdfbe9b3c1be44d55d057774bc5b3f"
	I1212 00:34:54.179527  298931 cri.go:89] found id: "ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f"
	I1212 00:34:54.179534  298931 cri.go:89] found id: "fc635a362d3ce8418e951572f54c063d0f426b6b9593da951cb6600cd1fcfc3b"
	I1212 00:34:54.179539  298931 cri.go:89] found id: ""
	I1212 00:34:54.179573  298931 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:34:54.195790  298931 out.go:203] 
	W1212 00:34:54.197088  298931 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:34:54.197107  298931 out.go:285] * 
	* 
	W1212 00:34:54.203632  298931 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:34:54.204856  298931 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-675290 --alsologtostderr -v=1 failed: exit status 80
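Editor's note: the stderr above traces the pause path: confirm kubelet is active, disable it, enumerate CRI containers with crictl, then run `sudo runc list -f json`, which fails repeatedly with "open /run/runc: no such file or directory" until the command exits with GUEST_PAUSE. Below is a small sketch of a bounded retry around that command, in the spirit of the retry.go lines in the log; it is illustrative only and not minikube's actual implementation.

// retry_runc_list.go - illustrative bounded retry around "runc list -f json".
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	var out []byte
	var err error
	// Three attempts with a short pause, roughly matching the two retries
	// visible in the log before the GUEST_PAUSE error.
	for attempt := 1; attempt <= 3; attempt++ {
		out, err = exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Println(string(out))
			return
		}
		fmt.Printf("attempt %d failed: %v: %s\n", attempt, err, out)
		time.Sleep(300 * time.Millisecond)
	}
	// On this node the command keeps failing ("open /run/runc: no such file
	// or directory"), so give up and surface the last error.
	fmt.Println("giving up:", err)
}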
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-675290
helpers_test.go:244: (dbg) docker inspect no-preload-675290:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d",
	        "Created": "2025-12-12T00:32:58.309247922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 290282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:34:15.207974428Z",
	            "FinishedAt": "2025-12-12T00:34:14.332069782Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/hostname",
	        "HostsPath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/hosts",
	        "LogPath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d-json.log",
	        "Name": "/no-preload-675290",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-675290:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-675290",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d",
	                "LowerDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75/merged",
	                "UpperDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75/diff",
	                "WorkDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-675290",
	                "Source": "/var/lib/docker/volumes/no-preload-675290/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-675290",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-675290",
	                "name.minikube.sigs.k8s.io": "no-preload-675290",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3cd2855e8cdd58bb1334faa202cfce0327a38e4c8320d89e185fe6fc185ef2dc",
	            "SandboxKey": "/var/run/docker/netns/3cd2855e8cdd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-675290": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f766d8223619c67c6480629ff7786cf2f3559f1e416095164a10f67db0a3ed9d",
	                    "EndpointID": "3e662c4ba30a20818a28363ffbd3f245f825502a74dcff3f408e0835ce620338",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "72:8b:6b:43:3e:ce",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-675290",
	                        "822239fdcf28"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675290 -n no-preload-675290
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675290 -n no-preload-675290: exit status 2 (328.452699ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-675290 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-675290 logs -n 25: (1.117666444s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ delete  │ -p running-upgrade-299658                                                                                                                                                                                                                     │ running-upgrade-299658 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-expiration-673665                                                                                                                                                                                                                     │ cert-expiration-673665 │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:32 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ cert-options-319518 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319518    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ -p cert-options-319518 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319518    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-options-319518                                                                                                                                                                                                                        │ cert-options-319518    │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p old-k8s-version-743506 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p no-preload-675290 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-858659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ stop    │ -p embed-certs-858659 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-743506 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p no-preload-675290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-858659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-858659     │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ image   │ old-k8s-version-743506 image list --format=json                                                                                                                                                                                               │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p old-k8s-version-743506 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                     │ old-k8s-version-743506 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ image   │ no-preload-675290 image list --format=json                                                                                                                                                                                                    │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p no-preload-675290 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-675290      │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:34:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:34:22.194991  292217 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:34:22.195276  292217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:22.195286  292217 out.go:374] Setting ErrFile to fd 2...
	I1212 00:34:22.195290  292217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:22.195461  292217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:34:22.195951  292217 out.go:368] Setting JSON to false
	I1212 00:34:22.197248  292217 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4608,"bootTime":1765495054,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:34:22.197307  292217 start.go:143] virtualization: kvm guest
	I1212 00:34:22.199590  292217 out.go:179] * [embed-certs-858659] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:34:22.200695  292217 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:34:22.200770  292217 notify.go:221] Checking for updates...
	I1212 00:34:22.202938  292217 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:34:22.205663  292217 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:34:22.207024  292217 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:34:22.208159  292217 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:34:22.209335  292217 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:34:22.211048  292217 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:34:22.211868  292217 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:34:22.238149  292217 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:34:22.238284  292217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:22.300345  292217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:34:22.28954278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:34:22.300452  292217 docker.go:319] overlay module found
	I1212 00:34:22.302189  292217 out.go:179] * Using the docker driver based on existing profile
	I1212 00:34:22.303236  292217 start.go:309] selected driver: docker
	I1212 00:34:22.303252  292217 start.go:927] validating driver "docker" against &{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:34:22.303357  292217 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:34:22.304085  292217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:22.364908  292217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:34:22.355220994 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:34:22.365186  292217 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:22.365208  292217 cni.go:84] Creating CNI manager for ""
	I1212 00:34:22.365254  292217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:34:22.365282  292217 start.go:353] cluster config:
	{Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:34:22.367044  292217 out.go:179] * Starting "embed-certs-858659" primary control-plane node in "embed-certs-858659" cluster
	I1212 00:34:22.368095  292217 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:34:22.369191  292217 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:34:22.370219  292217 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:34:22.370254  292217 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:34:22.370268  292217 cache.go:65] Caching tarball of preloaded images
	I1212 00:34:22.370312  292217 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:34:22.370362  292217 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:34:22.370377  292217 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:34:22.370514  292217 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:34:22.391729  292217 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:34:22.391750  292217 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:34:22.391769  292217 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:34:22.391800  292217 start.go:360] acquireMachinesLock for embed-certs-858659: {Name:mk65733daa8eb01c9a3ad2d27b0888c2a1a8b319 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:34:22.391881  292217 start.go:364] duration metric: took 47.626µs to acquireMachinesLock for "embed-certs-858659"
	I1212 00:34:22.391906  292217 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:34:22.391912  292217 fix.go:54] fixHost starting: 
	I1212 00:34:22.392190  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:22.410761  292217 fix.go:112] recreateIfNeeded on embed-certs-858659: state=Stopped err=<nil>
	W1212 00:34:22.410787  292217 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:34:17.930547  287750 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 00:34:17.934787  287750 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1212 00:34:17.935919  287750 api_server.go:141] control plane version: v1.28.0
	I1212 00:34:17.935939  287750 api_server.go:131] duration metric: took 506.401624ms to wait for apiserver health ...
	I1212 00:34:17.935961  287750 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:34:17.939387  287750 system_pods.go:59] 8 kube-system pods found
	I1212 00:34:17.939432  287750 system_pods.go:61] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:17.939448  287750 system_pods.go:61] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:17.939461  287750 system_pods.go:61] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:34:17.939470  287750 system_pods.go:61] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:17.939494  287750 system_pods.go:61] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:17.939506  287750 system_pods.go:61] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:34:17.939514  287750 system_pods.go:61] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:17.939523  287750 system_pods.go:61] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:34:17.939531  287750 system_pods.go:74] duration metric: took 3.56333ms to wait for pod list to return data ...
	I1212 00:34:17.939542  287750 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:34:17.941406  287750 default_sa.go:45] found service account: "default"
	I1212 00:34:17.941423  287750 default_sa.go:55] duration metric: took 1.872906ms for default service account to be created ...
	I1212 00:34:17.941431  287750 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:34:17.944007  287750 system_pods.go:86] 8 kube-system pods found
	I1212 00:34:17.944034  287750 system_pods.go:89] "coredns-5dd5756b68-nxwdc" [e73711a2-208b-41a6-a47f-6253638cfdf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:17.944044  287750 system_pods.go:89] "etcd-old-k8s-version-743506" [945b4dc3-f44c-48ec-8a03-ec0d012b5e09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:17.944052  287750 system_pods.go:89] "kindnet-s2gvw" [21d0881b-1da3-4a1d-967d-8f108d5d8a1f] Running
	I1212 00:34:17.944060  287750 system_pods.go:89] "kube-apiserver-old-k8s-version-743506" [13ddd011-c962-4b44-8b26-b80bf8df1e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:17.944069  287750 system_pods.go:89] "kube-controller-manager-old-k8s-version-743506" [3df133c8-f899-4f35-b4bf-d1849a7262e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:17.944079  287750 system_pods.go:89] "kube-proxy-pz8kt" [671c52f4-19ce-4b7b-8e97-e43f64cd4aeb] Running
	I1212 00:34:17.944088  287750 system_pods.go:89] "kube-scheduler-old-k8s-version-743506" [3e6b9a7d-5346-47dd-af74-f9f8b6904163] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:17.944094  287750 system_pods.go:89] "storage-provisioner" [ccc4d4a0-b9c6-4653-90dc-113128acc782] Running
	I1212 00:34:17.944105  287750 system_pods.go:126] duration metric: took 2.668947ms to wait for k8s-apps to be running ...
	I1212 00:34:17.944116  287750 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:34:17.944183  287750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:17.956567  287750 system_svc.go:56] duration metric: took 12.447031ms WaitForService to wait for kubelet
	I1212 00:34:17.956589  287750 kubeadm.go:587] duration metric: took 3.757690609s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:17.956609  287750 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:34:17.958554  287750 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:34:17.958575  287750 node_conditions.go:123] node cpu capacity is 8
	I1212 00:34:17.958593  287750 node_conditions.go:105] duration metric: took 1.974354ms to run NodePressure ...
	I1212 00:34:17.958607  287750 start.go:242] waiting for startup goroutines ...
	I1212 00:34:17.958622  287750 start.go:247] waiting for cluster config update ...
	I1212 00:34:17.958640  287750 start.go:256] writing updated cluster config ...
	I1212 00:34:17.958881  287750 ssh_runner.go:195] Run: rm -f paused
	I1212 00:34:17.962438  287750 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:17.965710  287750 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 00:34:19.971812  287750 pod_ready.go:104] pod "coredns-5dd5756b68-nxwdc" is not "Ready", error: node "old-k8s-version-743506" hosting pod "coredns-5dd5756b68-nxwdc" is not "Ready" (will retry)
	W1212 00:34:21.972884  287750 pod_ready.go:104] pod "coredns-5dd5756b68-nxwdc" is not "Ready", error: node "old-k8s-version-743506" hosting pod "coredns-5dd5756b68-nxwdc" is not "Ready" (will retry)
	I1212 00:34:21.825329  290093 cli_runner.go:164] Run: docker container inspect no-preload-675290 --format={{.State.Status}}
	I1212 00:34:21.826261  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 00:34:21.826281  290093 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 00:34:21.826370  290093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:34:21.849641  290093 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:21.849646  290093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:34:21.849664  290093 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:34:21.849826  290093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-675290
	I1212 00:34:21.851877  290093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:34:21.890953  290093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/no-preload-675290/id_rsa Username:docker}
	I1212 00:34:21.983765  290093 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:34:21.984639  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 00:34:21.984658  290093 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 00:34:21.992024  290093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:34:22.003673  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 00:34:22.003694  290093 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 00:34:22.005666  290093 node_ready.go:35] waiting up to 6m0s for node "no-preload-675290" to be "Ready" ...
	I1212 00:34:22.016144  290093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:22.022417  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 00:34:22.022439  290093 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 00:34:22.043609  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 00:34:22.043666  290093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 00:34:22.064203  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 00:34:22.064230  290093 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 00:34:22.081605  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 00:34:22.081626  290093 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 00:34:22.098674  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 00:34:22.098715  290093 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 00:34:22.115963  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 00:34:22.115987  290093 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 00:34:22.132839  290093 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:34:22.132864  290093 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 00:34:22.148773  290093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:34:22.954440  290093 node_ready.go:49] node "no-preload-675290" is "Ready"
	I1212 00:34:22.954493  290093 node_ready.go:38] duration metric: took 948.786308ms for node "no-preload-675290" to be "Ready" ...
	I1212 00:34:22.954513  290093 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:34:22.954568  290093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:34:23.460200  290093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.468149091s)
	I1212 00:34:23.460313  290093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.444116968s)
	I1212 00:34:23.460400  290093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.311593953s)
	I1212 00:34:23.460429  290093 api_server.go:72] duration metric: took 1.662216059s to wait for apiserver process to appear ...
	I1212 00:34:23.460441  290093 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:34:23.460498  290093 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:34:23.461985  290093 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-675290 addons enable metrics-server
	
	I1212 00:34:23.465588  290093 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:23.465612  290093 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:34:23.466996  290093 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1212 00:34:23.468160  290093 addons.go:530] duration metric: took 1.669912774s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 00:34:23.961381  290093 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:34:23.967063  290093 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:23.967088  290093 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:34:24.460610  290093 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:34:24.465379  290093 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 00:34:24.466274  290093 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 00:34:24.466296  290093 api_server.go:131] duration metric: took 1.005848918s to wait for apiserver health ...
	I1212 00:34:24.466304  290093 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:34:24.469960  290093 system_pods.go:59] 8 kube-system pods found
	I1212 00:34:24.470002  290093 system_pods.go:61] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:24.470020  290093 system_pods.go:61] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:24.470030  290093 system_pods.go:61] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:34:24.470042  290093 system_pods.go:61] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:24.470052  290093 system_pods.go:61] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:24.470065  290093 system_pods.go:61] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:34:24.470075  290093 system_pods.go:61] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:24.470083  290093 system_pods.go:61] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Running
	I1212 00:34:24.470095  290093 system_pods.go:74] duration metric: took 3.783504ms to wait for pod list to return data ...
	I1212 00:34:24.470107  290093 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:34:24.472424  290093 default_sa.go:45] found service account: "default"
	I1212 00:34:24.472446  290093 default_sa.go:55] duration metric: took 2.32759ms for default service account to be created ...
	I1212 00:34:24.472455  290093 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:34:24.474765  290093 system_pods.go:86] 8 kube-system pods found
	I1212 00:34:24.474789  290093 system_pods.go:89] "coredns-7d764666f9-44t4m" [cceb1c43-32c8-4878-8afd-9cffbf61ad07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:24.474797  290093 system_pods.go:89] "etcd-no-preload-675290" [1accd1d9-e622-4fcf-94b7-7b741ae7396c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:24.474802  290093 system_pods.go:89] "kindnet-ng47n" [a3a49761-52d7-4b77-a861-af908cd83f4d] Running
	I1212 00:34:24.474807  290093 system_pods.go:89] "kube-apiserver-no-preload-675290" [3999894f-9e53-41be-92c3-8a1e80acc865] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:24.474812  290093 system_pods.go:89] "kube-controller-manager-no-preload-675290" [d58f09a0-73a8-4ace-9153-a64de4b9ee18] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:24.474819  290093 system_pods.go:89] "kube-proxy-7pxpp" [57d08f79-0e03-4148-9724-cac54cc3a437] Running
	I1212 00:34:24.474824  290093 system_pods.go:89] "kube-scheduler-no-preload-675290" [a8821052-7fb4-4fb2-adaa-96cdfd57a028] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:24.474828  290093 system_pods.go:89] "storage-provisioner" [c0391e3a-aabe-4074-a617-136990bd5fb4] Running
	I1212 00:34:24.474837  290093 system_pods.go:126] duration metric: took 2.375958ms to wait for k8s-apps to be running ...
	I1212 00:34:24.474842  290093 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:34:24.474880  290093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:24.487107  290093 system_svc.go:56] duration metric: took 12.259625ms WaitForService to wait for kubelet
	I1212 00:34:24.487123  290093 kubeadm.go:587] duration metric: took 2.688911743s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:24.487151  290093 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:34:24.489298  290093 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:34:24.489318  290093 node_conditions.go:123] node cpu capacity is 8
	I1212 00:34:24.489333  290093 node_conditions.go:105] duration metric: took 2.175894ms to run NodePressure ...
	I1212 00:34:24.489343  290093 start.go:242] waiting for startup goroutines ...
	I1212 00:34:24.489352  290093 start.go:247] waiting for cluster config update ...
	I1212 00:34:24.489362  290093 start.go:256] writing updated cluster config ...
	I1212 00:34:24.489611  290093 ssh_runner.go:195] Run: rm -f paused
	I1212 00:34:24.493607  290093 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:24.496951  290093 pod_ready.go:83] waiting for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
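
	The sequence above (apiserver healthz, kube-system pod listing, default service account, then a per-pod "Ready" wait) can be reproduced by hand against the same profile. A minimal sketch, assuming kubectl is pointed at this profile's kubeconfig and using the apiserver address from the log (the healthz endpoint is typically readable anonymously via the default system:public-info-viewer binding):

	    # apiserver health endpoint, verbose form shows the per-hook checks seen in this log
	    curl -k https://192.168.76.2:8443/healthz?verbose
	    # list kube-system pods and wait for CoreDNS to report Ready
	    kubectl -n kube-system get pods
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
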
	I1212 00:34:22.412332  292217 out.go:252] * Restarting existing docker container for "embed-certs-858659" ...
	I1212 00:34:22.412393  292217 cli_runner.go:164] Run: docker start embed-certs-858659
	I1212 00:34:22.675395  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:22.697554  292217 kic.go:430] container "embed-certs-858659" state is running.
	I1212 00:34:22.698003  292217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:34:22.721205  292217 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/config.json ...
	I1212 00:34:22.721434  292217 machine.go:94] provisionDockerMachine start ...
	I1212 00:34:22.721530  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:22.740223  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:22.740531  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:22.740552  292217 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:34:22.741123  292217 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48100->127.0.0.1:33083: read: connection reset by peer
	I1212 00:34:25.873905  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:34:25.873938  292217 ubuntu.go:182] provisioning hostname "embed-certs-858659"
	I1212 00:34:25.874010  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:25.891640  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:25.891843  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:25.891854  292217 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-858659 && echo "embed-certs-858659" | sudo tee /etc/hostname
	I1212 00:34:26.033680  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-858659
	
	I1212 00:34:26.033749  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.054661  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:26.054969  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:26.055001  292217 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-858659' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-858659/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-858659' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:34:26.193045  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:34:26.193085  292217 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:34:26.193135  292217 ubuntu.go:190] setting up certificates
	I1212 00:34:26.193149  292217 provision.go:84] configureAuth start
	I1212 00:34:26.193222  292217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:34:26.210729  292217 provision.go:143] copyHostCerts
	I1212 00:34:26.210790  292217 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:34:26.210805  292217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:34:26.210864  292217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:34:26.211018  292217 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:34:26.211030  292217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:34:26.211064  292217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:34:26.211138  292217 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:34:26.211145  292217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:34:26.211176  292217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:34:26.211239  292217 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.embed-certs-858659 san=[127.0.0.1 192.168.94.2 embed-certs-858659 localhost minikube]
	I1212 00:34:26.334330  292217 provision.go:177] copyRemoteCerts
	I1212 00:34:26.334387  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:34:26.334432  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.352293  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:26.448550  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:34:26.465534  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:34:26.482790  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:34:26.500628  292217 provision.go:87] duration metric: took 307.45892ms to configureAuth
	I1212 00:34:26.500654  292217 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:34:26.500854  292217 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:34:26.500972  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.518572  292217 main.go:143] libmachine: Using SSH client type: native
	I1212 00:34:26.518811  292217 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 00:34:26.518834  292217 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:34:26.850738  292217 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:34:26.850803  292217 machine.go:97] duration metric: took 4.12935252s to provisionDockerMachine
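
	The provisioning step just above writes a sysconfig drop-in for CRI-O and restarts the service so the cluster's service CIDR is treated as an insecure registry. A quick way to confirm the result on the node, assuming shell access through the profile:

	    minikube -p embed-certs-858659 ssh -- cat /etc/sysconfig/crio.minikube
	    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    minikube -p embed-certs-858659 ssh -- sudo systemctl is-active crio
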
	I1212 00:34:26.850819  292217 start.go:293] postStartSetup for "embed-certs-858659" (driver="docker")
	I1212 00:34:26.850842  292217 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:34:26.850914  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:34:26.850984  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:26.871065  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:26.966453  292217 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:34:26.970137  292217 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:34:26.970162  292217 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:34:26.970172  292217 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:34:26.970227  292217 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:34:26.970325  292217 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:34:26.970442  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:34:26.978705  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:34:26.995716  292217 start.go:296] duration metric: took 144.870061ms for postStartSetup
	I1212 00:34:26.995782  292217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:34:26.995835  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:27.014285  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:27.105922  292217 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:34:27.110345  292217 fix.go:56] duration metric: took 4.718428372s for fixHost
	I1212 00:34:27.110371  292217 start.go:83] releasing machines lock for "embed-certs-858659", held for 4.718475367s
	I1212 00:34:27.110437  292217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-858659
	I1212 00:34:27.127373  292217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:34:27.127396  292217 ssh_runner.go:195] Run: cat /version.json
	I1212 00:34:27.127437  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:27.127445  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:27.144516  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:27.145862  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	W1212 00:34:24.471877  287750 pod_ready.go:104] pod "coredns-5dd5756b68-nxwdc" is not "Ready", error: node "old-k8s-version-743506" hosting pod "coredns-5dd5756b68-nxwdc" is not "Ready" (will retry)
	I1212 00:34:26.971400  287750 pod_ready.go:94] pod "coredns-5dd5756b68-nxwdc" is "Ready"
	I1212 00:34:26.971427  287750 pod_ready.go:86] duration metric: took 9.005696281s for pod "coredns-5dd5756b68-nxwdc" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:26.974242  287750 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:27.236084  292217 ssh_runner.go:195] Run: systemctl --version
	I1212 00:34:27.289011  292217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:34:27.320985  292217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:34:27.325714  292217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:34:27.325777  292217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:34:27.334535  292217 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:34:27.334554  292217 start.go:496] detecting cgroup driver to use...
	I1212 00:34:27.334579  292217 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:34:27.334633  292217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:34:27.348435  292217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:34:27.359652  292217 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:34:27.359703  292217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:34:27.374109  292217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:34:27.386469  292217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:34:27.460054  292217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:34:27.535041  292217 docker.go:234] disabling docker service ...
	I1212 00:34:27.535088  292217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:34:27.548165  292217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:34:27.559573  292217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:34:27.632790  292217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:34:27.713620  292217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:34:27.725354  292217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:34:27.738210  292217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:34:27.738258  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.746385  292217 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:34:27.746427  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.754504  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.762234  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.770331  292217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:34:27.777754  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.785740  292217 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.793156  292217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:27.800928  292217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:34:27.807604  292217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:34:27.814250  292217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:34:27.892059  292217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:34:28.023411  292217 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:34:28.023508  292217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:34:28.027324  292217 start.go:564] Will wait 60s for crictl version
	I1212 00:34:28.027377  292217 ssh_runner.go:195] Run: which crictl
	I1212 00:34:28.030733  292217 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:34:28.053419  292217 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
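
	The sed edits above only touch /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl), so a failed CRI-O restart at this point is usually traceable to that single drop-in. A minimal check on the node, assuming shell access:

	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info | head
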
	I1212 00:34:28.053521  292217 ssh_runner.go:195] Run: crio --version
	I1212 00:34:28.078218  292217 ssh_runner.go:195] Run: crio --version
	I1212 00:34:28.104986  292217 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:34:28.106118  292217 cli_runner.go:164] Run: docker network inspect embed-certs-858659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:34:28.122845  292217 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 00:34:28.127252  292217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:34:28.136929  292217 kubeadm.go:884] updating cluster {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:34:28.137027  292217 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:34:28.137068  292217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:34:28.166606  292217 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:34:28.166625  292217 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:34:28.166660  292217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:34:28.189997  292217 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:34:28.190014  292217 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:34:28.190022  292217 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1212 00:34:28.190122  292217 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-858659 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
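
	The kubelet command line shown here is rendered into a systemd drop-in rather than into the unit file itself (it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below), so the effective flags can be inspected with systemd's own tooling. A small sketch, assuming shell access to the node:

	    systemctl cat kubelet                          # unit plus all drop-ins, including 10-kubeadm.conf
	    systemctl show kubelet -p ExecStart --no-pager # the flattened ExecStart actually in effect
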
	I1212 00:34:28.190197  292217 ssh_runner.go:195] Run: crio config
	I1212 00:34:28.233455  292217 cni.go:84] Creating CNI manager for ""
	I1212 00:34:28.233501  292217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:34:28.233520  292217 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:34:28.233549  292217 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-858659 NodeName:embed-certs-858659 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:34:28.233667  292217 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-858659"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
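
	The generated manifest bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file; it is written to /var/tmp/minikube/kubeadm.yaml.new just below. Recent kubeadm releases can sanity-check such a file before it is applied; a minimal sketch, assuming the kubeadm binary minikube stages under /var/lib/minikube/binaries:

	    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
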
	I1212 00:34:28.233728  292217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:34:28.241238  292217 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:34:28.241286  292217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:34:28.248529  292217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1212 00:34:28.259931  292217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:34:28.272064  292217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1212 00:34:28.283961  292217 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:34:28.287295  292217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:34:28.296174  292217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:34:28.374313  292217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:34:28.399445  292217 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659 for IP: 192.168.94.2
	I1212 00:34:28.399469  292217 certs.go:195] generating shared ca certs ...
	I1212 00:34:28.399502  292217 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:28.399682  292217 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:34:28.399740  292217 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:34:28.399754  292217 certs.go:257] generating profile certs ...
	I1212 00:34:28.399858  292217 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/client.key
	I1212 00:34:28.399921  292217 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key.89584afc
	I1212 00:34:28.399969  292217 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key
	I1212 00:34:28.400101  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:34:28.400154  292217 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:34:28.400167  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:34:28.400199  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:34:28.400232  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:34:28.400265  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:34:28.400324  292217 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:34:28.401140  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:34:28.418445  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:34:28.435360  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:34:28.454053  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:34:28.476744  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 00:34:28.494464  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:34:28.512911  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:34:28.532124  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/embed-certs-858659/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:34:28.550167  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:34:28.569797  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:34:28.588171  292217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:34:28.604980  292217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:34:28.616294  292217 ssh_runner.go:195] Run: openssl version
	I1212 00:34:28.621964  292217 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.628771  292217 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:34:28.635451  292217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.638885  292217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.638945  292217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:34:28.673421  292217 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:34:28.680066  292217 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.686678  292217 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:34:28.693326  292217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.696643  292217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.696676  292217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:28.730803  292217 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:34:28.737734  292217 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.744631  292217 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:34:28.751977  292217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.755400  292217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.755447  292217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:34:28.789571  292217 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
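
	The loop above recreates by hand what c_rehash/update-ca-certificates would do: the link names 3ec20f2e.0, b5213941.0 and 51391683.0 are the subject-hash values that `openssl x509 -hash -noout` prints for the corresponding PEM files. The same convention on one of the copied certs, as a standalone example:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${h}.0
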
	I1212 00:34:28.796415  292217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:34:28.799981  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:34:28.833241  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:34:28.867120  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:34:28.906082  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:34:28.954312  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:34:28.999466  292217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
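
	Each `openssl x509 -checkend 86400` call above exits 0 only if the certificate remains valid for at least the next 24 hours, apparently so the restart path can reuse the existing control-plane certs rather than regenerate them. The same check on a single file, as a standalone example:

	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "still valid for 24h" || echo "expires within 24h"
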
	I1212 00:34:29.061379  292217 kubeadm.go:401] StartCluster: {Name:embed-certs-858659 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-858659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:34:29.061506  292217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:34:29.061568  292217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:34:29.094454  292217 cri.go:89] found id: "6eb73517ebee81333693231ad0296e35a758eeb5aa9cc45c7ee7663ea6e652e4"
	I1212 00:34:29.094504  292217 cri.go:89] found id: "07f4a35d8d4d1bc19d0c0dc6b015f381bc482dc379a9a416e57528498aa8e1e5"
	I1212 00:34:29.094512  292217 cri.go:89] found id: "a4dfa21dd3b082f3a9464428da14520464aa97d1551c9a8ddffbf56d063877a6"
	I1212 00:34:29.094516  292217 cri.go:89] found id: "3da8c9c634a0528028b8a0875544b6a0c41e72dc8bf1ff1da95beccf80094376"
	I1212 00:34:29.094520  292217 cri.go:89] found id: ""
	I1212 00:34:29.094580  292217 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 00:34:29.106285  292217 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:34:29Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:34:29.106353  292217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:34:29.113726  292217 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:34:29.113742  292217 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:34:29.113783  292217 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:34:29.120655  292217 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:34:29.121366  292217 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-858659" does not appear in /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:34:29.121810  292217 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-10975/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-858659" cluster setting kubeconfig missing "embed-certs-858659" context setting]
	I1212 00:34:29.122410  292217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:29.123956  292217 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:34:29.131575  292217 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1212 00:34:29.131601  292217 kubeadm.go:602] duration metric: took 17.853493ms to restartPrimaryControlPlane
	I1212 00:34:29.131610  292217 kubeadm.go:403] duration metric: took 70.240665ms to StartCluster
	I1212 00:34:29.131624  292217 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:29.131695  292217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:34:29.133806  292217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:29.134050  292217 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:34:29.134111  292217 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:34:29.134220  292217 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-858659"
	I1212 00:34:29.134242  292217 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-858659"
	W1212 00:34:29.134251  292217 addons.go:248] addon storage-provisioner should already be in state true
	I1212 00:34:29.134244  292217 addons.go:70] Setting dashboard=true in profile "embed-certs-858659"
	I1212 00:34:29.134268  292217 addons.go:239] Setting addon dashboard=true in "embed-certs-858659"
	I1212 00:34:29.134259  292217 addons.go:70] Setting default-storageclass=true in profile "embed-certs-858659"
	W1212 00:34:29.134278  292217 addons.go:248] addon dashboard should already be in state true
	I1212 00:34:29.134290  292217 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:34:29.134294  292217 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:34:29.134312  292217 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:34:29.134291  292217 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-858659"
	I1212 00:34:29.134698  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.134803  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.134819  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.135940  292217 out.go:179] * Verifying Kubernetes components...
	I1212 00:34:29.137294  292217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:34:29.160012  292217 addons.go:239] Setting addon default-storageclass=true in "embed-certs-858659"
	W1212 00:34:29.160036  292217 addons.go:248] addon default-storageclass should already be in state true
	I1212 00:34:29.160062  292217 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:34:29.160531  292217 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:34:29.161454  292217 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:29.162569  292217 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 00:34:29.162613  292217 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:34:29.162631  292217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:34:29.162683  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:29.164828  292217 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1212 00:34:26.502049  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: node "no-preload-675290" hosting pod "coredns-7d764666f9-44t4m" is not "Ready" (will retry)
	W1212 00:34:28.503136  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: node "no-preload-675290" hosting pod "coredns-7d764666f9-44t4m" is not "Ready" (will retry)
	I1212 00:34:29.166119  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 00:34:29.166135  292217 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 00:34:29.166188  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:29.192491  292217 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:29.192516  292217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:34:29.192574  292217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:34:29.196179  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:29.198591  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:29.216901  292217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:34:29.287840  292217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:34:29.302952  292217 node_ready.go:35] waiting up to 6m0s for node "embed-certs-858659" to be "Ready" ...
	I1212 00:34:29.317900  292217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:34:29.321896  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 00:34:29.321912  292217 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 00:34:29.343648  292217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:34:29.344330  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 00:34:29.344372  292217 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 00:34:29.363923  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 00:34:29.363955  292217 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 00:34:29.381875  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 00:34:29.381897  292217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 00:34:29.396784  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 00:34:29.396803  292217 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 00:34:29.410654  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 00:34:29.410676  292217 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 00:34:29.425501  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 00:34:29.425524  292217 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 00:34:29.440231  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 00:34:29.440252  292217 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 00:34:29.452746  292217 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:34:29.452766  292217 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 00:34:29.466329  292217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:34:30.818434  292217 node_ready.go:49] node "embed-certs-858659" is "Ready"
	I1212 00:34:30.818487  292217 node_ready.go:38] duration metric: took 1.515392528s for node "embed-certs-858659" to be "Ready" ...
	I1212 00:34:30.818508  292217 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:34:30.818565  292217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:34:31.468911  292217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.150974747s)
	I1212 00:34:31.468978  292217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.125284091s)
	I1212 00:34:31.469122  292217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.0027624s)
	I1212 00:34:31.469455  292217 api_server.go:72] duration metric: took 2.335374756s to wait for apiserver process to appear ...
	I1212 00:34:31.469505  292217 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:34:31.469524  292217 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:34:31.473590  292217 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-858659 addons enable metrics-server
	
	I1212 00:34:31.476459  292217 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:31.476505  292217 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:34:31.483691  292217 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1212 00:34:30.508733  263844 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062075524s)
	W1212 00:34:30.508779  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1212 00:34:30.508790  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:30.508810  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:30.546082  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:34:30.546123  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:34:30.579092  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:30.579118  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:30.604859  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:30.604882  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:30.657742  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:30.657769  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:30.671365  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:30.671388  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:30.705424  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:30.705450  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:31.484807  292217 addons.go:530] duration metric: took 2.350700971s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 00:34:31.969645  292217 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:34:31.975159  292217 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:31.975202  292217 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:34:28.982220  287750 pod_ready.go:104] pod "etcd-old-k8s-version-743506" is not "Ready", error: <nil>
	W1212 00:34:31.481681  287750 pod_ready.go:104] pod "etcd-old-k8s-version-743506" is not "Ready", error: <nil>
	I1212 00:34:31.981277  287750 pod_ready.go:94] pod "etcd-old-k8s-version-743506" is "Ready"
	I1212 00:34:31.981308  287750 pod_ready.go:86] duration metric: took 5.007040467s for pod "etcd-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:31.985958  287750 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:31.993506  287750 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-743506" is "Ready"
	I1212 00:34:31.993527  287750 pod_ready.go:86] duration metric: took 7.548054ms for pod "kube-apiserver-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:31.998043  287750 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.003355  287750 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-743506" is "Ready"
	I1212 00:34:32.003413  287750 pod_ready.go:86] duration metric: took 5.344333ms for pod "kube-controller-manager-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.006576  287750 pod_ready.go:83] waiting for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.178403  287750 pod_ready.go:94] pod "kube-proxy-pz8kt" is "Ready"
	I1212 00:34:32.178429  287750 pod_ready.go:86] duration metric: took 171.831568ms for pod "kube-proxy-pz8kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.379015  287750 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.778296  287750 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-743506" is "Ready"
	I1212 00:34:32.778333  287750 pod_ready.go:86] duration metric: took 399.28376ms for pod "kube-scheduler-old-k8s-version-743506" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:32.778358  287750 pod_ready.go:40] duration metric: took 14.8158908s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:32.833106  287750 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1212 00:34:32.835985  287750 out.go:203] 
	W1212 00:34:32.837089  287750 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1212 00:34:32.838320  287750 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1212 00:34:32.839516  287750 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-743506" cluster and "default" namespace by default
	W1212 00:34:31.007103  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: node "no-preload-675290" hosting pod "coredns-7d764666f9-44t4m" is not "Ready" (will retry)
	W1212 00:34:33.503221  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: <nil>
	I1212 00:34:33.249325  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:34.659371  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:57038->192.168.85.2:8443: read: connection reset by peer
	I1212 00:34:34.659452  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:34.659598  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:34.720459  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:34.720562  263844 cri.go:89] found id: "e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:34:34.720572  263844 cri.go:89] found id: ""
	I1212 00:34:34.720583  263844 logs.go:282] 2 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106]
	I1212 00:34:34.720649  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.727624  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.732978  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:34.733038  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:34.771884  263844 cri.go:89] found id: ""
	I1212 00:34:34.771911  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.771923  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:34.771930  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:34.771985  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:34.813253  263844 cri.go:89] found id: ""
	I1212 00:34:34.813292  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.813304  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:34.813313  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:34.813375  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:34.854049  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:34.854075  263844 cri.go:89] found id: ""
	I1212 00:34:34.854084  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:34.854152  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.860190  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:34.860258  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:34.898840  263844 cri.go:89] found id: ""
	I1212 00:34:34.898872  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.898883  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:34.898891  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:34.898952  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:34.937834  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:34.937859  263844 cri.go:89] found id: ""
	I1212 00:34:34.937869  263844 logs.go:282] 1 containers: [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:34.937925  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:34.944202  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:34.944414  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:34.982162  263844 cri.go:89] found id: ""
	I1212 00:34:34.982222  263844 logs.go:282] 0 containers: []
	W1212 00:34:34.982233  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:34.982250  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:34.982348  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:35.028882  263844 cri.go:89] found id: ""
	I1212 00:34:35.028907  263844 logs.go:282] 0 containers: []
	W1212 00:34:35.028919  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:35.028935  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:35.028955  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:35.123296  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:35.123511  263844 logs.go:123] Gathering logs for kube-apiserver [e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106] ...
	I1212 00:34:35.123684  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5c7a94f9e6437b4be8db0f112af551f48940b791c05af7eb772e98af8802106"
	I1212 00:34:35.174086  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:35.174189  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:35.261332  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:35.261372  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:35.311106  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:35.311138  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:35.426171  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:35.426206  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:35.471138  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:35.471171  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:35.510352  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:35.510384  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:35.548527  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:35.548558  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:32.469729  292217 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 00:34:32.474936  292217 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1212 00:34:32.475992  292217 api_server.go:141] control plane version: v1.34.2
	I1212 00:34:32.476016  292217 api_server.go:131] duration metric: took 1.006503678s to wait for apiserver health ...
	I1212 00:34:32.476025  292217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:34:32.479628  292217 system_pods.go:59] 8 kube-system pods found
	I1212 00:34:32.479664  292217 system_pods.go:61] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:32.479681  292217 system_pods.go:61] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:32.479695  292217 system_pods.go:61] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 00:34:32.479711  292217 system_pods.go:61] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:32.479726  292217 system_pods.go:61] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:32.479738  292217 system_pods.go:61] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 00:34:32.479757  292217 system_pods.go:61] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:32.479769  292217 system_pods.go:61] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:34:32.479777  292217 system_pods.go:74] duration metric: took 3.744489ms to wait for pod list to return data ...
	I1212 00:34:32.479789  292217 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:34:32.481998  292217 default_sa.go:45] found service account: "default"
	I1212 00:34:32.482021  292217 default_sa.go:55] duration metric: took 2.221892ms for default service account to be created ...
	I1212 00:34:32.482031  292217 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:34:32.484891  292217 system_pods.go:86] 8 kube-system pods found
	I1212 00:34:32.484922  292217 system_pods.go:89] "coredns-66bc5c9577-8x66p" [1e3ac279-c897-4100-aa49-a94ed95d1b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:34:32.484941  292217 system_pods.go:89] "etcd-embed-certs-858659" [a6f85bfe-d87e-4cd1-a34a-6517b835d825] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:34:32.484954  292217 system_pods.go:89] "kindnet-9jvdg" [295eca47-46bb-43bf-981b-7320ba579410] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 00:34:32.484963  292217 system_pods.go:89] "kube-apiserver-embed-certs-858659" [b0ed7f59-ed08-4325-a71e-b6fc09d2f588] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:34:32.484982  292217 system_pods.go:89] "kube-controller-manager-embed-certs-858659" [c5be6475-d2da-40b2-97eb-4df8cd8a51db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:34:32.484994  292217 system_pods.go:89] "kube-proxy-httpr" [d6220e54-3a3a-4fbe-94e1-d0117757204a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 00:34:32.485002  292217 system_pods.go:89] "kube-scheduler-embed-certs-858659" [2c4596a5-b170-4c02-93ef-32fac96600c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:34:32.485010  292217 system_pods.go:89] "storage-provisioner" [1e3f7607-a2f4-4ca4-84c0-8cffb038ee03] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:34:32.485019  292217 system_pods.go:126] duration metric: took 2.981143ms to wait for k8s-apps to be running ...
	I1212 00:34:32.485026  292217 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:34:32.485080  292217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:34:32.502896  292217 system_svc.go:56] duration metric: took 17.862237ms WaitForService to wait for kubelet
	I1212 00:34:32.502923  292217 kubeadm.go:587] duration metric: took 3.368842736s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:32.502943  292217 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:34:32.506066  292217 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:34:32.506091  292217 node_conditions.go:123] node cpu capacity is 8
	I1212 00:34:32.506107  292217 node_conditions.go:105] duration metric: took 3.15793ms to run NodePressure ...
	I1212 00:34:32.506121  292217 start.go:242] waiting for startup goroutines ...
	I1212 00:34:32.506137  292217 start.go:247] waiting for cluster config update ...
	I1212 00:34:32.506152  292217 start.go:256] writing updated cluster config ...
	I1212 00:34:32.506449  292217 ssh_runner.go:195] Run: rm -f paused
	I1212 00:34:32.511014  292217 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:32.516674  292217 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8x66p" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 00:34:34.527697  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:36.531266  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:35.509694  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: <nil>
	W1212 00:34:38.002902  290093 pod_ready.go:104] pod "coredns-7d764666f9-44t4m" is not "Ready", error: <nil>
	I1212 00:34:39.003919  290093 pod_ready.go:94] pod "coredns-7d764666f9-44t4m" is "Ready"
	I1212 00:34:39.003948  290093 pod_ready.go:86] duration metric: took 14.506978089s for pod "coredns-7d764666f9-44t4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.007164  290093 pod_ready.go:83] waiting for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.012443  290093 pod_ready.go:94] pod "etcd-no-preload-675290" is "Ready"
	I1212 00:34:39.012467  290093 pod_ready.go:86] duration metric: took 5.280222ms for pod "etcd-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.015470  290093 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.019749  290093 pod_ready.go:94] pod "kube-apiserver-no-preload-675290" is "Ready"
	I1212 00:34:39.019769  290093 pod_ready.go:86] duration metric: took 4.25314ms for pod "kube-apiserver-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.022432  290093 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.603008  290093 pod_ready.go:94] pod "kube-controller-manager-no-preload-675290" is "Ready"
	I1212 00:34:39.603053  290093 pod_ready.go:86] duration metric: took 580.604646ms for pod "kube-controller-manager-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:39.801951  290093 pod_ready.go:83] waiting for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.202574  290093 pod_ready.go:94] pod "kube-proxy-7pxpp" is "Ready"
	I1212 00:34:40.202612  290093 pod_ready.go:86] duration metric: took 400.60658ms for pod "kube-proxy-7pxpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.402173  290093 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.801438  290093 pod_ready.go:94] pod "kube-scheduler-no-preload-675290" is "Ready"
	I1212 00:34:40.801466  290093 pod_ready.go:86] duration metric: took 399.266926ms for pod "kube-scheduler-no-preload-675290" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:34:40.801497  290093 pod_ready.go:40] duration metric: took 16.307864565s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:34:40.857643  290093 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 00:34:40.899899  290093 out.go:179] * Done! kubectl is now configured to use "no-preload-675290" cluster and "default" namespace by default
	I1212 00:34:38.070909  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:38.071317  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:38.071368  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:38.071437  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:38.100977  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:38.101000  263844 cri.go:89] found id: ""
	I1212 00:34:38.101008  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:38.101055  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.105578  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:38.105642  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:38.137933  263844 cri.go:89] found id: ""
	I1212 00:34:38.137961  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.137977  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:38.137986  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:38.138051  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:38.172506  263844 cri.go:89] found id: ""
	I1212 00:34:38.172711  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.172783  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:38.172849  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:38.172980  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:38.209393  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:38.209411  263844 cri.go:89] found id: ""
	I1212 00:34:38.209418  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:38.209463  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.213539  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:38.213610  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:38.255964  263844 cri.go:89] found id: ""
	I1212 00:34:38.255987  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.255997  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:38.256005  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:38.256070  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:38.294229  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:38.294319  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:38.294326  263844 cri.go:89] found id: ""
	I1212 00:34:38.294333  263844 logs.go:282] 2 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:38.294395  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.299827  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:38.304884  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:38.304948  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:38.345668  263844 cri.go:89] found id: ""
	I1212 00:34:38.345711  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.345724  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:38.345733  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:38.345800  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:38.383644  263844 cri.go:89] found id: ""
	I1212 00:34:38.383671  263844 logs.go:282] 0 containers: []
	W1212 00:34:38.383683  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:38.383703  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:38.383716  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:38.511578  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:38.511613  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:38.593958  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:38.593982  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:38.593999  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:38.630769  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:38.630799  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:38.711173  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:38.711213  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:38.758424  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:38.758457  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:38.779867  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:38.779897  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:38.821126  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:38.821166  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:38.859790  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:38.859830  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:41.397624  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:41.398042  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:41.398100  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:41.398171  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:41.433192  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:41.433213  263844 cri.go:89] found id: ""
	I1212 00:34:41.433223  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:41.433281  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.437728  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:41.437792  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:41.463618  263844 cri.go:89] found id: ""
	I1212 00:34:41.463643  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.463653  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:41.463660  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:41.463731  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:41.490995  263844 cri.go:89] found id: ""
	I1212 00:34:41.491018  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.491026  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:41.491035  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:41.491093  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:41.518246  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:41.518267  263844 cri.go:89] found id: ""
	I1212 00:34:41.518276  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:41.518332  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.522787  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:41.522849  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:41.549671  263844 cri.go:89] found id: ""
	I1212 00:34:41.549706  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.549716  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:41.549723  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:41.549783  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:41.577845  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:41.577868  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:41.577874  263844 cri.go:89] found id: ""
	I1212 00:34:41.577882  263844 logs.go:282] 2 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:41.577929  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.581784  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.585354  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:41.585419  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:41.615221  263844 cri.go:89] found id: ""
	I1212 00:34:41.615254  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.615265  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:41.615274  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:41.615336  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:41.641215  263844 cri.go:89] found id: ""
	I1212 00:34:41.641238  263844 logs.go:282] 0 containers: []
	W1212 00:34:41.641248  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:41.641266  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:41.641280  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:41.699142  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:41.699160  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:41.699176  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:41.728077  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:41.728106  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:41.753905  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:41.753927  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:41.778651  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:41.778679  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:41.807391  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:41.807414  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:41.831717  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:41.831741  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:41.882004  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:41.882028  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:41.958828  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:41.958859  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 00:34:39.022400  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:41.025312  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	I1212 00:34:44.472376  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:44.472832  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:44.472887  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:44.472950  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:44.499664  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:44.499683  263844 cri.go:89] found id: ""
	I1212 00:34:44.499690  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:44.499740  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.503544  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:44.503613  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:44.530338  263844 cri.go:89] found id: ""
	I1212 00:34:44.530363  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.530373  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:44.530380  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:44.530421  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:44.556031  263844 cri.go:89] found id: ""
	I1212 00:34:44.556054  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.556064  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:44.556071  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:44.556130  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:44.581377  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:44.581397  263844 cri.go:89] found id: ""
	I1212 00:34:44.581406  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:44.581504  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.585206  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:44.585254  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:44.609906  263844 cri.go:89] found id: ""
	I1212 00:34:44.609929  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.609937  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:44.609942  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:44.609995  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:44.635568  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:44.635590  263844 cri.go:89] found id: "92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:44.635594  263844 cri.go:89] found id: ""
	I1212 00:34:44.635601  263844 logs.go:282] 2 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2]
	I1212 00:34:44.635645  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.639406  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:44.642913  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:44.642978  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:44.667080  263844 cri.go:89] found id: ""
	I1212 00:34:44.667105  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.667114  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:44.667120  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:44.667166  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:44.690884  263844 cri.go:89] found id: ""
	I1212 00:34:44.690908  263844 logs.go:282] 0 containers: []
	W1212 00:34:44.690917  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:44.690929  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:44.690940  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:44.741690  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:44.741717  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:44.769952  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:44.769978  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:44.845857  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:44.845885  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:44.898939  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:44.898959  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:44.898973  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:44.929908  263844 logs.go:123] Gathering logs for kube-controller-manager [92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2] ...
	I1212 00:34:44.929935  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92cdacafd43039eb23b6495439ea6005185a56419ed35ddbeb950949dbea95e2"
	I1212 00:34:44.955084  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:44.955105  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:44.968097  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:44.968119  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:44.992542  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:44.992564  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	W1212 00:34:43.533865  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:46.022030  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	I1212 00:34:47.517884  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:47.518257  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:47.518315  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:47.518370  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:47.546883  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:47.546906  263844 cri.go:89] found id: ""
	I1212 00:34:47.546915  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:47.546972  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:47.551348  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:47.551420  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:47.580827  263844 cri.go:89] found id: ""
	I1212 00:34:47.580852  263844 logs.go:282] 0 containers: []
	W1212 00:34:47.580863  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:47.580870  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:47.580932  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:47.609355  263844 cri.go:89] found id: ""
	I1212 00:34:47.609385  263844 logs.go:282] 0 containers: []
	W1212 00:34:47.609397  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:47.609405  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:47.609466  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:47.636649  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:47.636668  263844 cri.go:89] found id: ""
	I1212 00:34:47.636678  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:47.636732  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:47.640406  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:47.640466  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:47.666574  263844 cri.go:89] found id: ""
	I1212 00:34:47.666599  263844 logs.go:282] 0 containers: []
	W1212 00:34:47.666610  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:47.666617  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:47.666664  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:47.691610  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:47.691628  263844 cri.go:89] found id: ""
	I1212 00:34:47.691636  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:34:47.691677  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:47.695273  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:47.695336  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:47.719838  263844 cri.go:89] found id: ""
	I1212 00:34:47.719861  263844 logs.go:282] 0 containers: []
	W1212 00:34:47.719870  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:47.719878  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:47.719921  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:47.744543  263844 cri.go:89] found id: ""
	I1212 00:34:47.744568  263844 logs.go:282] 0 containers: []
	W1212 00:34:47.744578  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:47.744589  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:47.744603  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:47.757796  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:47.757819  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:47.810084  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:47.810101  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:47.810113  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:47.838845  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:47.838869  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:47.864697  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:47.864720  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:47.888787  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:47.888810  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:47.947388  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:47.947417  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:47.982449  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:47.982493  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:34:50.558541  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:50.558892  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:50.558948  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:50.559002  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:50.589548  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:50.589569  263844 cri.go:89] found id: ""
	I1212 00:34:50.589578  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:50.589637  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:50.593649  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:50.593712  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:50.623145  263844 cri.go:89] found id: ""
	I1212 00:34:50.623180  263844 logs.go:282] 0 containers: []
	W1212 00:34:50.623192  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:50.623199  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:50.623255  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:50.651470  263844 cri.go:89] found id: ""
	I1212 00:34:50.651522  263844 logs.go:282] 0 containers: []
	W1212 00:34:50.651532  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:50.651539  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:50.651591  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:50.677798  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:50.677820  263844 cri.go:89] found id: ""
	I1212 00:34:50.677829  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:50.677883  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:50.681561  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:50.681609  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:50.706028  263844 cri.go:89] found id: ""
	I1212 00:34:50.706049  263844 logs.go:282] 0 containers: []
	W1212 00:34:50.706059  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:50.706066  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:50.706117  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:34:50.731725  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:50.731746  263844 cri.go:89] found id: ""
	I1212 00:34:50.731753  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:34:50.731792  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:50.735244  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:34:50.735292  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:34:50.762370  263844 cri.go:89] found id: ""
	I1212 00:34:50.762387  263844 logs.go:282] 0 containers: []
	W1212 00:34:50.762397  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:34:50.762405  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:34:50.762453  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:34:50.790333  263844 cri.go:89] found id: ""
	I1212 00:34:50.790370  263844 logs.go:282] 0 containers: []
	W1212 00:34:50.790380  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:34:50.790408  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:50.790428  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:50.806634  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:50.806663  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:50.867430  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:50.867447  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:50.867465  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:50.900647  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:50.900674  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:50.927989  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:50.928019  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:50.958891  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:50.958914  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:51.019197  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:51.019236  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:51.051612  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:34:51.051641  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 00:34:48.521732  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:34:50.522435  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 12 00:34:37 no-preload-675290 crio[567]: time="2025-12-12T00:34:37.871053333Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:37 no-preload-675290 crio[567]: time="2025-12-12T00:34:37.905204347Z" level=info msg="Created container fc635a362d3ce8418e951572f54c063d0f426b6b9593da951cb6600cd1fcfc3b: kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zdhfk/kubernetes-dashboard" id=a9dd190e-0d75-46d8-9792-d1bcacf1b05b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:37 no-preload-675290 crio[567]: time="2025-12-12T00:34:37.90581844Z" level=info msg="Starting container: fc635a362d3ce8418e951572f54c063d0f426b6b9593da951cb6600cd1fcfc3b" id=2a49ade7-5f1a-4712-a1cc-6db33b2e21fa name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:37 no-preload-675290 crio[567]: time="2025-12-12T00:34:37.907629139Z" level=info msg="Started container" PID=1523 containerID=fc635a362d3ce8418e951572f54c063d0f426b6b9593da951cb6600cd1fcfc3b description=kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zdhfk/kubernetes-dashboard id=2a49ade7-5f1a-4712-a1cc-6db33b2e21fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e83cbcfcbd7f6b89844b1e17bed487b567a7356f1cae4b45bafed12a286d51b
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.072418912Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=07507ab3-55ba-4e09-ba72-ea372f4b1a4a name=/runtime.v1.ImageService/PullImage
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.073353034Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2a36fff1-76e0-4b3c-825f-aa270879d91a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.077025996Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=519c265a-2a3e-45b6-84b3-e1f4a10d19f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.083387546Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper" id=2996d1de-e9ba-4b14-8aeb-0e4b886a022a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.083533137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.093148985Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.093935736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.130846759Z" level=info msg="Created container 1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper" id=2996d1de-e9ba-4b14-8aeb-0e4b886a022a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.131592941Z" level=info msg="Starting container: 1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7" id=9c5f5470-8a66-4c9c-ac61-27860016eb4d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.133705576Z" level=info msg="Started container" PID=1742 containerID=1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper id=9c5f5470-8a66-4c9c-ac61-27860016eb4d name=/runtime.v1.RuntimeService/StartContainer sandboxID=a54b5f3af6270d3d107a66e15dfcb6f9f65a12d2b8cd93b265dea1c0c62aea8c
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.175107038Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cb948782-aeef-49a1-bd9a-e37990d4f79d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.178364498Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a0e50224-c7da-4a0b-9194-5a5f95a24f57 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.181993015Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper" id=711b89b1-5b5a-4065-9295-4ecb63a86312 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.182096102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.191283964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.191921752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.218364138Z" level=info msg="Created container ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper" id=711b89b1-5b5a-4065-9295-4ecb63a86312 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.219126336Z" level=info msg="Starting container: ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f" id=7990899c-c531-4034-baa5-e90bbbc5c3f2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.221747313Z" level=info msg="Started container" PID=1764 containerID=ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper id=7990899c-c531-4034-baa5-e90bbbc5c3f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a54b5f3af6270d3d107a66e15dfcb6f9f65a12d2b8cd93b265dea1c0c62aea8c
	Dec 12 00:34:42 no-preload-675290 crio[567]: time="2025-12-12T00:34:42.180346112Z" level=info msg="Removing container: 1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7" id=4cf8d505-7ac3-4eaf-b6e8-2bd3e26a7357 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:34:42 no-preload-675290 crio[567]: time="2025-12-12T00:34:42.190337256Z" level=info msg="Removed container 1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper" id=4cf8d505-7ac3-4eaf-b6e8-2bd3e26a7357 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ba47d0e7ab02f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   1                   a54b5f3af6270       dashboard-metrics-scraper-867fb5f87b-7czh9   kubernetes-dashboard
	fc635a362d3ce       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   17 seconds ago      Running             kubernetes-dashboard        0                   1e83cbcfcbd7f       kubernetes-dashboard-b84665fb8-zdhfk         kubernetes-dashboard
	5598f88354534       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           24 seconds ago      Running             busybox                     1                   21969cf52357c       busybox                                      default
	8756cb93787e3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           24 seconds ago      Running             coredns                     0                   743db983aa9b5       coredns-7d764666f9-44t4m                     kube-system
	0b0225504b360       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           31 seconds ago      Exited              storage-provisioner         0                   4e02d3758dc6e       storage-provisioner                          kube-system
	cc18f21bf8c65       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           31 seconds ago      Running             kube-proxy                  0                   ebb6a57c5e2ef       kube-proxy-7pxpp                             kube-system
	c773979867a38       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           31 seconds ago      Running             kindnet-cni                 0                   a557dbfb6b1a9       kindnet-ng47n                                kube-system
	ac6219e970ded       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           33 seconds ago      Running             kube-scheduler              0                   fbb458c48fa35       kube-scheduler-no-preload-675290             kube-system
	09dd91a035f4e       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           33 seconds ago      Running             kube-controller-manager     0                   5311e18cd05e4       kube-controller-manager-no-preload-675290    kube-system
	b88c273361619       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           33 seconds ago      Running             kube-apiserver              0                   ccb518f9a091e       kube-apiserver-no-preload-675290             kube-system
	3787e40b4cb5a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           33 seconds ago      Running             etcd                        0                   fcf9654e24291       etcd-no-preload-675290                       kube-system
	
	
	==> coredns [8756cb93787e3b94062fff7a5ad449f413a0953cb033edf4e89087b4b35d0ecb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46212 - 16717 "HINFO IN 4914678871883387684.6736985512875188181. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.131455668s
	
	
	==> describe nodes <==
	Name:               no-preload-675290
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-675290
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=no-preload-675290
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_33_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:33:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-675290
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:34:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:34:33 +0000   Fri, 12 Dec 2025 00:33:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:34:33 +0000   Fri, 12 Dec 2025 00:33:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:34:33 +0000   Fri, 12 Dec 2025 00:33:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:34:33 +0000   Fri, 12 Dec 2025 00:34:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-675290
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                bb171fe3-47ef-405d-9d08-6137f609e70c
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 coredns-7d764666f9-44t4m                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     85s
	  kube-system                 etcd-no-preload-675290                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         91s
	  kube-system                 kindnet-ng47n                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      85s
	  kube-system                 kube-apiserver-no-preload-675290              250m (3%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-no-preload-675290     200m (2%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-7pxpp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-no-preload-675290              100m (1%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7czh9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zdhfk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  87s   node-controller  Node no-preload-675290 event: Registered Node no-preload-675290 in Controller
	  Normal  RegisteredNode  29s   node-controller  Node no-preload-675290 event: Registered Node no-preload-675290 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [3787e40b4cb5a662778290c05ca00d3817fdfbe9b3c1be44d55d057774bc5b3f] <==
	{"level":"warn","ts":"2025-12-12T00:34:22.247115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.254100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.267402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.273313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.281242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.288322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.304077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.311277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.320520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.333783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.338811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.345593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.352576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.359816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.366958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.373571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.379541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.386292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.393299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.399533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.414548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.420700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.427286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.434891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.491141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47206","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:34:55 up  1:17,  0 user,  load average: 5.47, 3.20, 1.96
	Linux no-preload-675290 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c773979867a3857d504f5fe3ee07988f68ca6fef4d7f84d7fafacdb70902f2f8] <==
	I1212 00:34:23.631144       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:34:23.631365       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 00:34:23.631466       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:34:23.631504       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:34:23.631526       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:34:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:34:23.780558       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:34:23.780919       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:34:23.781107       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:34:23.781881       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:34:24.176816       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:34:24.176840       1 metrics.go:72] Registering metrics
	I1212 00:34:24.176947       1 controller.go:711] "Syncing nftables rules"
	I1212 00:34:33.781176       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:34:33.781232       1 main.go:301] handling current node
	I1212 00:34:43.781526       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:34:43.781556       1 main.go:301] handling current node
	I1212 00:34:53.789554       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:34:53.789586       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b88c2733616198eb832ac5f6598b52771ff77dc17d732bbee216a8ceba1085c9] <==
	I1212 00:34:22.984350       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:22.984350       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:22.984381       1 aggregator.go:187] initial CRD sync complete...
	I1212 00:34:22.984390       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 00:34:22.984395       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:34:22.984401       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:34:22.984578       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:34:22.984586       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 00:34:22.984653       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 00:34:22.992508       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1212 00:34:22.994255       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 00:34:22.996783       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:23.031184       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:34:23.235028       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:34:23.235028       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:34:23.259322       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:34:23.294059       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:34:23.317274       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:34:23.322629       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:34:23.356246       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.61.130"}
	I1212 00:34:23.365442       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.172.138"}
	I1212 00:34:23.884630       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 00:34:26.599843       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:34:26.797827       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:34:26.849533       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [09dd91a035f4ed00c98271aa71b03f50c8302250a3bad1e75601f3f063c96c11] <==
	I1212 00:34:26.152088       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152251       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152258       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152313       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152363       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1212 00:34:26.152405       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152452       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152508       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152529       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152556       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152657       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152680       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152451       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-675290"
	I1212 00:34:26.153522       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.153564       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1212 00:34:26.152849       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.153578       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.153581       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.157497       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:34:26.163113       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.253058       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.253077       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 00:34:26.253081       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 00:34:26.257738       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:36.155298       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [cc18f21bf8c65e549247cc01742151c3b3c2a3b4c6e9af0b6f20a581191484c1] <==
	I1212 00:34:23.482121       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:34:23.542035       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:34:23.642855       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:23.642898       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 00:34:23.642966       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:34:23.660220       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:34:23.660262       1 server_linux.go:136] "Using iptables Proxier"
	I1212 00:34:23.664885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:34:23.665212       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 00:34:23.665237       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:23.667221       1 config.go:200] "Starting service config controller"
	I1212 00:34:23.667240       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:34:23.667265       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:34:23.667270       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:34:23.667283       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:34:23.667288       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:34:23.667556       1 config.go:309] "Starting node config controller"
	I1212 00:34:23.667611       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:34:23.767398       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:34:23.767405       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:34:23.767453       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:34:23.767690       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [ac6219e970ded4f9c3fd189c0a0034da3c97b482398366b730cf9355f56749e4] <==
	I1212 00:34:22.203860       1 serving.go:386] Generated self-signed cert in-memory
	W1212 00:34:22.938201       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:34:22.938236       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:34:22.938249       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:34:22.938257       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:34:22.976069       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1212 00:34:22.976165       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:22.979151       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:34:22.979190       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:34:22.979260       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 00:34:22.979311       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 00:34:23.079507       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 00:34:38 no-preload-675290 kubelet[720]: E1212 00:34:38.164683     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zdhfk" containerName="kubernetes-dashboard"
	Dec 12 00:34:38 no-preload-675290 kubelet[720]: E1212 00:34:38.543773     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-675290" containerName="kube-scheduler"
	Dec 12 00:34:38 no-preload-675290 kubelet[720]: I1212 00:34:38.566026     720 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zdhfk" podStartSLOduration=8.418419596 podStartE2EDuration="12.56600597s" podCreationTimestamp="2025-12-12 00:34:26 +0000 UTC" firstStartedPulling="2025-12-12 00:34:33.714405929 +0000 UTC m=+12.698800674" lastFinishedPulling="2025-12-12 00:34:37.861992312 +0000 UTC m=+16.846387048" observedRunningTime="2025-12-12 00:34:38.180016239 +0000 UTC m=+17.164410992" watchObservedRunningTime="2025-12-12 00:34:38.56600597 +0000 UTC m=+17.550400723"
	Dec 12 00:34:38 no-preload-675290 kubelet[720]: I1212 00:34:38.688581     720 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 00:34:38 no-preload-675290 kubelet[720]: E1212 00:34:38.688811     720 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-44t4m" containerName="coredns"
	Dec 12 00:34:39 no-preload-675290 kubelet[720]: E1212 00:34:39.167672     720 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-44t4m" containerName="coredns"
	Dec 12 00:34:39 no-preload-675290 kubelet[720]: E1212 00:34:39.167820     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-675290" containerName="kube-scheduler"
	Dec 12 00:34:39 no-preload-675290 kubelet[720]: E1212 00:34:39.167954     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zdhfk" containerName="kubernetes-dashboard"
	Dec 12 00:34:39 no-preload-675290 kubelet[720]: E1212 00:34:39.364291     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-675290" containerName="kube-controller-manager"
	Dec 12 00:34:41 no-preload-675290 kubelet[720]: E1212 00:34:41.174629     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" containerName="dashboard-metrics-scraper"
	Dec 12 00:34:41 no-preload-675290 kubelet[720]: I1212 00:34:41.174669     720 scope.go:122] "RemoveContainer" containerID="1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7"
	Dec 12 00:34:42 no-preload-675290 kubelet[720]: I1212 00:34:42.178904     720 scope.go:122] "RemoveContainer" containerID="1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7"
	Dec 12 00:34:42 no-preload-675290 kubelet[720]: E1212 00:34:42.179046     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" containerName="dashboard-metrics-scraper"
	Dec 12 00:34:42 no-preload-675290 kubelet[720]: I1212 00:34:42.179075     720 scope.go:122] "RemoveContainer" containerID="ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f"
	Dec 12 00:34:42 no-preload-675290 kubelet[720]: E1212 00:34:42.179330     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7czh9_kubernetes-dashboard(cfb1d889-2aef-4495-a10a-7c80e5910165)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" podUID="cfb1d889-2aef-4495-a10a-7c80e5910165"
	Dec 12 00:34:43 no-preload-675290 kubelet[720]: E1212 00:34:43.182673     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" containerName="dashboard-metrics-scraper"
	Dec 12 00:34:43 no-preload-675290 kubelet[720]: I1212 00:34:43.182698     720 scope.go:122] "RemoveContainer" containerID="ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f"
	Dec 12 00:34:43 no-preload-675290 kubelet[720]: E1212 00:34:43.182825     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7czh9_kubernetes-dashboard(cfb1d889-2aef-4495-a10a-7c80e5910165)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" podUID="cfb1d889-2aef-4495-a10a-7c80e5910165"
	Dec 12 00:34:44 no-preload-675290 kubelet[720]: E1212 00:34:44.184681     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" containerName="dashboard-metrics-scraper"
	Dec 12 00:34:44 no-preload-675290 kubelet[720]: I1212 00:34:44.184711     720 scope.go:122] "RemoveContainer" containerID="ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f"
	Dec 12 00:34:44 no-preload-675290 kubelet[720]: E1212 00:34:44.184865     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7czh9_kubernetes-dashboard(cfb1d889-2aef-4495-a10a-7c80e5910165)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" podUID="cfb1d889-2aef-4495-a10a-7c80e5910165"
	Dec 12 00:34:53 no-preload-675290 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:34:53 no-preload-675290 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:34:53 no-preload-675290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:34:53 no-preload-675290 systemd[1]: kubelet.service: Consumed 1.168s CPU time.
	
	
	==> kubernetes-dashboard [fc635a362d3ce8418e951572f54c063d0f426b6b9593da951cb6600cd1fcfc3b] <==
	2025/12/12 00:34:37 Using namespace: kubernetes-dashboard
	2025/12/12 00:34:37 Using in-cluster config to connect to apiserver
	2025/12/12 00:34:37 Using secret token for csrf signing
	2025/12/12 00:34:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 00:34:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 00:34:37 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/12 00:34:37 Generating JWE encryption key
	2025/12/12 00:34:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 00:34:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 00:34:38 Initializing JWE encryption key from synchronized object
	2025/12/12 00:34:38 Creating in-cluster Sidecar client
	2025/12/12 00:34:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 00:34:38 Serving insecurely on HTTP port: 9090
	2025/12/12 00:34:37 Starting overwatch
	
	
	==> storage-provisioner [0b0225504b360347799aa89a639fd044dab1a6bb89d5ff5364dfdd123cb5e696] <==
	I1212 00:34:23.454196       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 00:34:53.457908       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-675290 -n no-preload-675290
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-675290 -n no-preload-675290: exit status 2 (352.404091ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-675290 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-675290
helpers_test.go:244: (dbg) docker inspect no-preload-675290:

-- stdout --
	[
	    {
	        "Id": "822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d",
	        "Created": "2025-12-12T00:32:58.309247922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 290282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:34:15.207974428Z",
	            "FinishedAt": "2025-12-12T00:34:14.332069782Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/hostname",
	        "HostsPath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/hosts",
	        "LogPath": "/var/lib/docker/containers/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d/822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d-json.log",
	        "Name": "/no-preload-675290",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-675290:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-675290",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "822239fdcf287d6ba739ba85c2d6f29f7deba63bf573fd42822ba005a57e489d",
	                "LowerDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75/merged",
	                "UpperDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75/diff",
	                "WorkDir": "/var/lib/docker/overlay2/526e2f5333a3d044d97b913738c91ecc06a6a6ea28306358c63bf56b192a3e75/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-675290",
	                "Source": "/var/lib/docker/volumes/no-preload-675290/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-675290",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-675290",
	                "name.minikube.sigs.k8s.io": "no-preload-675290",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3cd2855e8cdd58bb1334faa202cfce0327a38e4c8320d89e185fe6fc185ef2dc",
	            "SandboxKey": "/var/run/docker/netns/3cd2855e8cdd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-675290": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f766d8223619c67c6480629ff7786cf2f3559f1e416095164a10f67db0a3ed9d",
	                    "EndpointID": "3e662c4ba30a20818a28363ffbd3f245f825502a74dcff3f408e0835ce620338",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "72:8b:6b:43:3e:ce",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-675290",
	                        "822239fdcf28"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675290 -n no-preload-675290
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675290 -n no-preload-675290: exit status 2 (362.545315ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-675290 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-675290 logs -n 25: (1.136407206s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:32 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ cert-options-319518 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319518          │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ -p cert-options-319518 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319518          │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ delete  │ -p cert-options-319518                                                                                                                                                                                                                        │ cert-options-319518          │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p old-k8s-version-743506 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p no-preload-675290 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-858659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ stop    │ -p embed-certs-858659 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-743506 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p no-preload-675290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-858659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ image   │ old-k8s-version-743506 image list --format=json                                                                                                                                                                                               │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p old-k8s-version-743506 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                     │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ image   │ no-preload-675290 image list --format=json                                                                                                                                                                                                    │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p no-preload-675290 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                     │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ delete  │ -p disable-driver-mounts-039387                                                                                                                                                                                                               │ disable-driver-mounts-039387 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p default-k8s-diff-port-079970 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:34:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:34:55.710349  300250 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:34:55.710680  300250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:55.710690  300250 out.go:374] Setting ErrFile to fd 2...
	I1212 00:34:55.710695  300250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:55.710888  300250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:34:55.711365  300250 out.go:368] Setting JSON to false
	I1212 00:34:55.712618  300250 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4642,"bootTime":1765495054,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:34:55.712694  300250 start.go:143] virtualization: kvm guest
	I1212 00:34:55.715153  300250 out.go:179] * [default-k8s-diff-port-079970] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:34:55.716504  300250 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:34:55.716535  300250 notify.go:221] Checking for updates...
	I1212 00:34:55.718986  300250 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:34:55.722431  300250 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:34:55.725728  300250 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:34:55.726908  300250 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:34:55.728106  300250 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:34:55.729647  300250 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:34:55.729771  300250 config.go:182] Loaded profile config "kubernetes-upgrade-605797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:34:55.729860  300250 config.go:182] Loaded profile config "no-preload-675290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:34:55.729958  300250 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:34:55.759374  300250 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:34:55.759515  300250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:55.824466  300250 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-12 00:34:55.813797172 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:34:55.824581  300250 docker.go:319] overlay module found
	I1212 00:34:55.830611  300250 out.go:179] * Using the docker driver based on user configuration
	I1212 00:34:55.832028  300250 start.go:309] selected driver: docker
	I1212 00:34:55.832047  300250 start.go:927] validating driver "docker" against <nil>
	I1212 00:34:55.832061  300250 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:34:55.832615  300250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:55.893986  300250 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-12 00:34:55.884579773 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:34:55.894124  300250 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:34:55.894423  300250 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:55.896317  300250 out.go:179] * Using Docker driver with root privileges
	I1212 00:34:55.898010  300250 cni.go:84] Creating CNI manager for ""
	I1212 00:34:55.898091  300250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:34:55.898102  300250 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:34:55.898174  300250 start.go:353] cluster config:
	{Name:default-k8s-diff-port-079970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:34:55.900255  300250 out.go:179] * Starting "default-k8s-diff-port-079970" primary control-plane node in "default-k8s-diff-port-079970" cluster
	I1212 00:34:55.901495  300250 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:34:55.902774  300250 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:34:55.904111  300250 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:34:55.904145  300250 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:34:55.904154  300250 cache.go:65] Caching tarball of preloaded images
	I1212 00:34:55.904211  300250 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:34:55.904262  300250 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:34:55.904277  300250 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:34:55.904408  300250 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/config.json ...
	I1212 00:34:55.904438  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/config.json: {Name:mkc36bbe529d1fc87c5d9d731949bf56bf48d515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:55.925635  300250 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:34:55.925656  300250 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:34:55.925671  300250 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:34:55.925703  300250 start.go:360] acquireMachinesLock for default-k8s-diff-port-079970: {Name:mkb0fee4ba0a09cdea3ab1cb24b98ef1e83d9857 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:34:55.925822  300250 start.go:364] duration metric: took 87.93µs to acquireMachinesLock for "default-k8s-diff-port-079970"
	I1212 00:34:55.925852  300250 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-079970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:34:55.925929  300250 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 12 00:34:37 no-preload-675290 crio[567]: time="2025-12-12T00:34:37.871053333Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:37 no-preload-675290 crio[567]: time="2025-12-12T00:34:37.905204347Z" level=info msg="Created container fc635a362d3ce8418e951572f54c063d0f426b6b9593da951cb6600cd1fcfc3b: kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zdhfk/kubernetes-dashboard" id=a9dd190e-0d75-46d8-9792-d1bcacf1b05b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:37 no-preload-675290 crio[567]: time="2025-12-12T00:34:37.90581844Z" level=info msg="Starting container: fc635a362d3ce8418e951572f54c063d0f426b6b9593da951cb6600cd1fcfc3b" id=2a49ade7-5f1a-4712-a1cc-6db33b2e21fa name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:37 no-preload-675290 crio[567]: time="2025-12-12T00:34:37.907629139Z" level=info msg="Started container" PID=1523 containerID=fc635a362d3ce8418e951572f54c063d0f426b6b9593da951cb6600cd1fcfc3b description=kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zdhfk/kubernetes-dashboard id=2a49ade7-5f1a-4712-a1cc-6db33b2e21fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e83cbcfcbd7f6b89844b1e17bed487b567a7356f1cae4b45bafed12a286d51b
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.072418912Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=07507ab3-55ba-4e09-ba72-ea372f4b1a4a name=/runtime.v1.ImageService/PullImage
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.073353034Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2a36fff1-76e0-4b3c-825f-aa270879d91a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.077025996Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=519c265a-2a3e-45b6-84b3-e1f4a10d19f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.083387546Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper" id=2996d1de-e9ba-4b14-8aeb-0e4b886a022a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.083533137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.093148985Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.093935736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.130846759Z" level=info msg="Created container 1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper" id=2996d1de-e9ba-4b14-8aeb-0e4b886a022a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.131592941Z" level=info msg="Starting container: 1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7" id=9c5f5470-8a66-4c9c-ac61-27860016eb4d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.133705576Z" level=info msg="Started container" PID=1742 containerID=1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper id=9c5f5470-8a66-4c9c-ac61-27860016eb4d name=/runtime.v1.RuntimeService/StartContainer sandboxID=a54b5f3af6270d3d107a66e15dfcb6f9f65a12d2b8cd93b265dea1c0c62aea8c
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.175107038Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cb948782-aeef-49a1-bd9a-e37990d4f79d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.178364498Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a0e50224-c7da-4a0b-9194-5a5f95a24f57 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.181993015Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper" id=711b89b1-5b5a-4065-9295-4ecb63a86312 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.182096102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.191283964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.191921752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.218364138Z" level=info msg="Created container ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper" id=711b89b1-5b5a-4065-9295-4ecb63a86312 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.219126336Z" level=info msg="Starting container: ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f" id=7990899c-c531-4034-baa5-e90bbbc5c3f2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:41 no-preload-675290 crio[567]: time="2025-12-12T00:34:41.221747313Z" level=info msg="Started container" PID=1764 containerID=ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper id=7990899c-c531-4034-baa5-e90bbbc5c3f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a54b5f3af6270d3d107a66e15dfcb6f9f65a12d2b8cd93b265dea1c0c62aea8c
	Dec 12 00:34:42 no-preload-675290 crio[567]: time="2025-12-12T00:34:42.180346112Z" level=info msg="Removing container: 1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7" id=4cf8d505-7ac3-4eaf-b6e8-2bd3e26a7357 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:34:42 no-preload-675290 crio[567]: time="2025-12-12T00:34:42.190337256Z" level=info msg="Removed container 1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9/dashboard-metrics-scraper" id=4cf8d505-7ac3-4eaf-b6e8-2bd3e26a7357 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ba47d0e7ab02f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   1                   a54b5f3af6270       dashboard-metrics-scraper-867fb5f87b-7czh9   kubernetes-dashboard
	fc635a362d3ce       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   19 seconds ago      Running             kubernetes-dashboard        0                   1e83cbcfcbd7f       kubernetes-dashboard-b84665fb8-zdhfk         kubernetes-dashboard
	5598f88354534       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           26 seconds ago      Running             busybox                     1                   21969cf52357c       busybox                                      default
	8756cb93787e3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           26 seconds ago      Running             coredns                     0                   743db983aa9b5       coredns-7d764666f9-44t4m                     kube-system
	0b0225504b360       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           33 seconds ago      Exited              storage-provisioner         0                   4e02d3758dc6e       storage-provisioner                          kube-system
	cc18f21bf8c65       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           33 seconds ago      Running             kube-proxy                  0                   ebb6a57c5e2ef       kube-proxy-7pxpp                             kube-system
	c773979867a38       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           33 seconds ago      Running             kindnet-cni                 0                   a557dbfb6b1a9       kindnet-ng47n                                kube-system
	ac6219e970ded       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           35 seconds ago      Running             kube-scheduler              0                   fbb458c48fa35       kube-scheduler-no-preload-675290             kube-system
	09dd91a035f4e       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           35 seconds ago      Running             kube-controller-manager     0                   5311e18cd05e4       kube-controller-manager-no-preload-675290    kube-system
	b88c273361619       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           35 seconds ago      Running             kube-apiserver              0                   ccb518f9a091e       kube-apiserver-no-preload-675290             kube-system
	3787e40b4cb5a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           35 seconds ago      Running             etcd                        0                   fcf9654e24291       etcd-no-preload-675290                       kube-system
	
	
	==> coredns [8756cb93787e3b94062fff7a5ad449f413a0953cb033edf4e89087b4b35d0ecb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46212 - 16717 "HINFO IN 4914678871883387684.6736985512875188181. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.131455668s
	
	
	==> describe nodes <==
	Name:               no-preload-675290
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-675290
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=no-preload-675290
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_33_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:33:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-675290
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:34:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:34:33 +0000   Fri, 12 Dec 2025 00:33:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:34:33 +0000   Fri, 12 Dec 2025 00:33:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:34:33 +0000   Fri, 12 Dec 2025 00:33:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:34:33 +0000   Fri, 12 Dec 2025 00:34:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-675290
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                bb171fe3-47ef-405d-9d08-6137f609e70c
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 coredns-7d764666f9-44t4m                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     87s
	  kube-system                 etcd-no-preload-675290                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         93s
	  kube-system                 kindnet-ng47n                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-no-preload-675290              250m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-no-preload-675290     200m (2%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-7pxpp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-no-preload-675290              100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7czh9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zdhfk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  89s   node-controller  Node no-preload-675290 event: Registered Node no-preload-675290 in Controller
	  Normal  RegisteredNode  31s   node-controller  Node no-preload-675290 event: Registered Node no-preload-675290 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [3787e40b4cb5a662778290c05ca00d3817fdfbe9b3c1be44d55d057774bc5b3f] <==
	{"level":"warn","ts":"2025-12-12T00:34:22.247115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.254100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.267402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.273313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.281242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.288322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.304077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.311277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.320520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.333783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.338811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.345593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.352576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.359816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.366958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.373571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.379541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.386292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.393299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.399533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.414548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.420700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.427286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.434891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:22.491141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47206","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:34:57 up  1:17,  0 user,  load average: 6.31, 3.41, 2.04
	Linux no-preload-675290 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c773979867a3857d504f5fe3ee07988f68ca6fef4d7f84d7fafacdb70902f2f8] <==
	I1212 00:34:23.631144       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:34:23.631365       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 00:34:23.631466       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:34:23.631504       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:34:23.631526       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:34:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:34:23.780558       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:34:23.780919       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:34:23.781107       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:34:23.781881       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:34:24.176816       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:34:24.176840       1 metrics.go:72] Registering metrics
	I1212 00:34:24.176947       1 controller.go:711] "Syncing nftables rules"
	I1212 00:34:33.781176       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:34:33.781232       1 main.go:301] handling current node
	I1212 00:34:43.781526       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:34:43.781556       1 main.go:301] handling current node
	I1212 00:34:53.789554       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 00:34:53.789586       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b88c2733616198eb832ac5f6598b52771ff77dc17d732bbee216a8ceba1085c9] <==
	I1212 00:34:22.984350       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:22.984350       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:22.984381       1 aggregator.go:187] initial CRD sync complete...
	I1212 00:34:22.984390       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 00:34:22.984395       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:34:22.984401       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:34:22.984578       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:34:22.984586       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 00:34:22.984653       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 00:34:22.992508       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1212 00:34:22.994255       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 00:34:22.996783       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:23.031184       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:34:23.235028       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:34:23.235028       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:34:23.259322       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:34:23.294059       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:34:23.317274       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:34:23.322629       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:34:23.356246       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.61.130"}
	I1212 00:34:23.365442       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.172.138"}
	I1212 00:34:23.884630       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 00:34:26.599843       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:34:26.797827       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:34:26.849533       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [09dd91a035f4ed00c98271aa71b03f50c8302250a3bad1e75601f3f063c96c11] <==
	I1212 00:34:26.152088       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152251       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152258       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152313       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152363       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1212 00:34:26.152405       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152452       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152508       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152529       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152556       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152657       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152680       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.152451       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-675290"
	I1212 00:34:26.153522       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.153564       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1212 00:34:26.152849       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.153578       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.153581       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.157497       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:34:26.163113       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.253058       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:26.253077       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 00:34:26.253081       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 00:34:26.257738       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:36.155298       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [cc18f21bf8c65e549247cc01742151c3b3c2a3b4c6e9af0b6f20a581191484c1] <==
	I1212 00:34:23.482121       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:34:23.542035       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:34:23.642855       1 shared_informer.go:377] "Caches are synced"
	I1212 00:34:23.642898       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 00:34:23.642966       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:34:23.660220       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:34:23.660262       1 server_linux.go:136] "Using iptables Proxier"
	I1212 00:34:23.664885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:34:23.665212       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 00:34:23.665237       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:23.667221       1 config.go:200] "Starting service config controller"
	I1212 00:34:23.667240       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:34:23.667265       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:34:23.667270       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:34:23.667283       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:34:23.667288       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:34:23.667556       1 config.go:309] "Starting node config controller"
	I1212 00:34:23.667611       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:34:23.767398       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:34:23.767405       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:34:23.767453       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:34:23.767690       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [ac6219e970ded4f9c3fd189c0a0034da3c97b482398366b730cf9355f56749e4] <==
	I1212 00:34:22.203860       1 serving.go:386] Generated self-signed cert in-memory
	W1212 00:34:22.938201       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:34:22.938236       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:34:22.938249       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:34:22.938257       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:34:22.976069       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1212 00:34:22.976165       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:22.979151       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:34:22.979190       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:34:22.979260       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 00:34:22.979311       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 00:34:23.079507       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 00:34:38 no-preload-675290 kubelet[720]: E1212 00:34:38.164683     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zdhfk" containerName="kubernetes-dashboard"
	Dec 12 00:34:38 no-preload-675290 kubelet[720]: E1212 00:34:38.543773     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-675290" containerName="kube-scheduler"
	Dec 12 00:34:38 no-preload-675290 kubelet[720]: I1212 00:34:38.566026     720 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zdhfk" podStartSLOduration=8.418419596 podStartE2EDuration="12.56600597s" podCreationTimestamp="2025-12-12 00:34:26 +0000 UTC" firstStartedPulling="2025-12-12 00:34:33.714405929 +0000 UTC m=+12.698800674" lastFinishedPulling="2025-12-12 00:34:37.861992312 +0000 UTC m=+16.846387048" observedRunningTime="2025-12-12 00:34:38.180016239 +0000 UTC m=+17.164410992" watchObservedRunningTime="2025-12-12 00:34:38.56600597 +0000 UTC m=+17.550400723"
	Dec 12 00:34:38 no-preload-675290 kubelet[720]: I1212 00:34:38.688581     720 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 00:34:38 no-preload-675290 kubelet[720]: E1212 00:34:38.688811     720 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-44t4m" containerName="coredns"
	Dec 12 00:34:39 no-preload-675290 kubelet[720]: E1212 00:34:39.167672     720 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-44t4m" containerName="coredns"
	Dec 12 00:34:39 no-preload-675290 kubelet[720]: E1212 00:34:39.167820     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-675290" containerName="kube-scheduler"
	Dec 12 00:34:39 no-preload-675290 kubelet[720]: E1212 00:34:39.167954     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zdhfk" containerName="kubernetes-dashboard"
	Dec 12 00:34:39 no-preload-675290 kubelet[720]: E1212 00:34:39.364291     720 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-675290" containerName="kube-controller-manager"
	Dec 12 00:34:41 no-preload-675290 kubelet[720]: E1212 00:34:41.174629     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" containerName="dashboard-metrics-scraper"
	Dec 12 00:34:41 no-preload-675290 kubelet[720]: I1212 00:34:41.174669     720 scope.go:122] "RemoveContainer" containerID="1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7"
	Dec 12 00:34:42 no-preload-675290 kubelet[720]: I1212 00:34:42.178904     720 scope.go:122] "RemoveContainer" containerID="1cb4f2592f6e608a76423453e145be859f9fd7791ae454ac1acd4872d2caeaa7"
	Dec 12 00:34:42 no-preload-675290 kubelet[720]: E1212 00:34:42.179046     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" containerName="dashboard-metrics-scraper"
	Dec 12 00:34:42 no-preload-675290 kubelet[720]: I1212 00:34:42.179075     720 scope.go:122] "RemoveContainer" containerID="ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f"
	Dec 12 00:34:42 no-preload-675290 kubelet[720]: E1212 00:34:42.179330     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7czh9_kubernetes-dashboard(cfb1d889-2aef-4495-a10a-7c80e5910165)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" podUID="cfb1d889-2aef-4495-a10a-7c80e5910165"
	Dec 12 00:34:43 no-preload-675290 kubelet[720]: E1212 00:34:43.182673     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" containerName="dashboard-metrics-scraper"
	Dec 12 00:34:43 no-preload-675290 kubelet[720]: I1212 00:34:43.182698     720 scope.go:122] "RemoveContainer" containerID="ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f"
	Dec 12 00:34:43 no-preload-675290 kubelet[720]: E1212 00:34:43.182825     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7czh9_kubernetes-dashboard(cfb1d889-2aef-4495-a10a-7c80e5910165)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" podUID="cfb1d889-2aef-4495-a10a-7c80e5910165"
	Dec 12 00:34:44 no-preload-675290 kubelet[720]: E1212 00:34:44.184681     720 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" containerName="dashboard-metrics-scraper"
	Dec 12 00:34:44 no-preload-675290 kubelet[720]: I1212 00:34:44.184711     720 scope.go:122] "RemoveContainer" containerID="ba47d0e7ab02f71497aa68c85a3a258865d44c5f0904dd855bffcd51ab9bcb6f"
	Dec 12 00:34:44 no-preload-675290 kubelet[720]: E1212 00:34:44.184865     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7czh9_kubernetes-dashboard(cfb1d889-2aef-4495-a10a-7c80e5910165)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7czh9" podUID="cfb1d889-2aef-4495-a10a-7c80e5910165"
	Dec 12 00:34:53 no-preload-675290 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:34:53 no-preload-675290 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:34:53 no-preload-675290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:34:53 no-preload-675290 systemd[1]: kubelet.service: Consumed 1.168s CPU time.
	
	
	==> kubernetes-dashboard [fc635a362d3ce8418e951572f54c063d0f426b6b9593da951cb6600cd1fcfc3b] <==
	2025/12/12 00:34:37 Using namespace: kubernetes-dashboard
	2025/12/12 00:34:37 Using in-cluster config to connect to apiserver
	2025/12/12 00:34:37 Using secret token for csrf signing
	2025/12/12 00:34:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 00:34:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 00:34:37 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/12 00:34:37 Generating JWE encryption key
	2025/12/12 00:34:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 00:34:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 00:34:38 Initializing JWE encryption key from synchronized object
	2025/12/12 00:34:38 Creating in-cluster Sidecar client
	2025/12/12 00:34:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 00:34:38 Serving insecurely on HTTP port: 9090
	2025/12/12 00:34:37 Starting overwatch
	
	
	==> storage-provisioner [0b0225504b360347799aa89a639fd044dab1a6bb89d5ff5364dfdd123cb5e696] <==
	I1212 00:34:23.454196       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 00:34:53.457908       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-675290 -n no-preload-675290
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-675290 -n no-preload-675290: exit status 2 (344.350257ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-675290 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-858659 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-858659 --alsologtostderr -v=1: exit status 80 (2.174643954s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-858659 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:35:20.181196  307671 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:20.181495  307671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:20.181505  307671 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:20.181510  307671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:20.181766  307671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:35:20.182061  307671 out.go:368] Setting JSON to false
	I1212 00:35:20.182089  307671 mustload.go:66] Loading cluster: embed-certs-858659
	I1212 00:35:20.182582  307671 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:20.182983  307671 cli_runner.go:164] Run: docker container inspect embed-certs-858659 --format={{.State.Status}}
	I1212 00:35:20.202664  307671 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:35:20.202981  307671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:20.259689  307671 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-12 00:35:20.249694385 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:20.260283  307671 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-858659 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 00:35:20.262080  307671 out.go:179] * Pausing node embed-certs-858659 ... 
	I1212 00:35:20.263206  307671 host.go:66] Checking if "embed-certs-858659" exists ...
	I1212 00:35:20.263452  307671 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:20.263503  307671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-858659
	I1212 00:35:20.281989  307671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/embed-certs-858659/id_rsa Username:docker}
	I1212 00:35:20.378981  307671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:20.390436  307671 pause.go:52] kubelet running: true
	I1212 00:35:20.390518  307671 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:35:20.548232  307671 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:35:20.548347  307671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:35:20.624316  307671 cri.go:89] found id: "5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741"
	I1212 00:35:20.624339  307671 cri.go:89] found id: "7c809ac147187b6cddcd652c01f5ca9264456d90ed5b50ad626d6a454686cea5"
	I1212 00:35:20.624346  307671 cri.go:89] found id: "3c9dcfc0a39b026ebae759fa5108c0241e5b83a83111d1a9c78bde6446593eed"
	I1212 00:35:20.624350  307671 cri.go:89] found id: "06a70ae8e015e6ed73dee2a76a938f3ac6d6569ec98202ecea145fbd5fdd2e6e"
	I1212 00:35:20.624354  307671 cri.go:89] found id: "0427b9f7a3afccc4063d5b86ecc7f448e0192c4c5812428503d97bac33bfe9cd"
	I1212 00:35:20.624358  307671 cri.go:89] found id: "6eb73517ebee81333693231ad0296e35a758eeb5aa9cc45c7ee7663ea6e652e4"
	I1212 00:35:20.624363  307671 cri.go:89] found id: "07f4a35d8d4d1bc19d0c0dc6b015f381bc482dc379a9a416e57528498aa8e1e5"
	I1212 00:35:20.624375  307671 cri.go:89] found id: "a4dfa21dd3b082f3a9464428da14520464aa97d1551c9a8ddffbf56d063877a6"
	I1212 00:35:20.624380  307671 cri.go:89] found id: "3da8c9c634a0528028b8a0875544b6a0c41e72dc8bf1ff1da95beccf80094376"
	I1212 00:35:20.624392  307671 cri.go:89] found id: "7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50"
	I1212 00:35:20.624401  307671 cri.go:89] found id: "70c68a457925502a9aee2ba9aecc60dbf1e189971126f7728a2e5d3dad2af8c7"
	I1212 00:35:20.624407  307671 cri.go:89] found id: ""
	I1212 00:35:20.624451  307671 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:35:20.636339  307671 retry.go:31] will retry after 128.733345ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:20Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:35:20.765709  307671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:20.779751  307671 pause.go:52] kubelet running: false
	I1212 00:35:20.779805  307671 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:35:20.921189  307671 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:35:20.921263  307671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:35:20.985063  307671 cri.go:89] found id: "5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741"
	I1212 00:35:20.985085  307671 cri.go:89] found id: "7c809ac147187b6cddcd652c01f5ca9264456d90ed5b50ad626d6a454686cea5"
	I1212 00:35:20.985090  307671 cri.go:89] found id: "3c9dcfc0a39b026ebae759fa5108c0241e5b83a83111d1a9c78bde6446593eed"
	I1212 00:35:20.985093  307671 cri.go:89] found id: "06a70ae8e015e6ed73dee2a76a938f3ac6d6569ec98202ecea145fbd5fdd2e6e"
	I1212 00:35:20.985096  307671 cri.go:89] found id: "0427b9f7a3afccc4063d5b86ecc7f448e0192c4c5812428503d97bac33bfe9cd"
	I1212 00:35:20.985099  307671 cri.go:89] found id: "6eb73517ebee81333693231ad0296e35a758eeb5aa9cc45c7ee7663ea6e652e4"
	I1212 00:35:20.985102  307671 cri.go:89] found id: "07f4a35d8d4d1bc19d0c0dc6b015f381bc482dc379a9a416e57528498aa8e1e5"
	I1212 00:35:20.985105  307671 cri.go:89] found id: "a4dfa21dd3b082f3a9464428da14520464aa97d1551c9a8ddffbf56d063877a6"
	I1212 00:35:20.985107  307671 cri.go:89] found id: "3da8c9c634a0528028b8a0875544b6a0c41e72dc8bf1ff1da95beccf80094376"
	I1212 00:35:20.985124  307671 cri.go:89] found id: "7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50"
	I1212 00:35:20.985130  307671 cri.go:89] found id: "70c68a457925502a9aee2ba9aecc60dbf1e189971126f7728a2e5d3dad2af8c7"
	I1212 00:35:20.985133  307671 cri.go:89] found id: ""
	I1212 00:35:20.985169  307671 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:35:20.996792  307671 retry.go:31] will retry after 227.41107ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:20Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:35:21.225264  307671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:21.238115  307671 pause.go:52] kubelet running: false
	I1212 00:35:21.238171  307671 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:35:21.377762  307671 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:35:21.377823  307671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:35:21.442809  307671 cri.go:89] found id: "5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741"
	I1212 00:35:21.442831  307671 cri.go:89] found id: "7c809ac147187b6cddcd652c01f5ca9264456d90ed5b50ad626d6a454686cea5"
	I1212 00:35:21.442835  307671 cri.go:89] found id: "3c9dcfc0a39b026ebae759fa5108c0241e5b83a83111d1a9c78bde6446593eed"
	I1212 00:35:21.442840  307671 cri.go:89] found id: "06a70ae8e015e6ed73dee2a76a938f3ac6d6569ec98202ecea145fbd5fdd2e6e"
	I1212 00:35:21.442844  307671 cri.go:89] found id: "0427b9f7a3afccc4063d5b86ecc7f448e0192c4c5812428503d97bac33bfe9cd"
	I1212 00:35:21.442850  307671 cri.go:89] found id: "6eb73517ebee81333693231ad0296e35a758eeb5aa9cc45c7ee7663ea6e652e4"
	I1212 00:35:21.442854  307671 cri.go:89] found id: "07f4a35d8d4d1bc19d0c0dc6b015f381bc482dc379a9a416e57528498aa8e1e5"
	I1212 00:35:21.442859  307671 cri.go:89] found id: "a4dfa21dd3b082f3a9464428da14520464aa97d1551c9a8ddffbf56d063877a6"
	I1212 00:35:21.442864  307671 cri.go:89] found id: "3da8c9c634a0528028b8a0875544b6a0c41e72dc8bf1ff1da95beccf80094376"
	I1212 00:35:21.442872  307671 cri.go:89] found id: "7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50"
	I1212 00:35:21.442880  307671 cri.go:89] found id: "70c68a457925502a9aee2ba9aecc60dbf1e189971126f7728a2e5d3dad2af8c7"
	I1212 00:35:21.442885  307671 cri.go:89] found id: ""
	I1212 00:35:21.442931  307671 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:35:21.454549  307671 retry.go:31] will retry after 584.355535ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:21Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:35:22.039291  307671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:22.053304  307671 pause.go:52] kubelet running: false
	I1212 00:35:22.053361  307671 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:35:22.208186  307671 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:35:22.208282  307671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:35:22.273358  307671 cri.go:89] found id: "5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741"
	I1212 00:35:22.273377  307671 cri.go:89] found id: "7c809ac147187b6cddcd652c01f5ca9264456d90ed5b50ad626d6a454686cea5"
	I1212 00:35:22.273381  307671 cri.go:89] found id: "3c9dcfc0a39b026ebae759fa5108c0241e5b83a83111d1a9c78bde6446593eed"
	I1212 00:35:22.273384  307671 cri.go:89] found id: "06a70ae8e015e6ed73dee2a76a938f3ac6d6569ec98202ecea145fbd5fdd2e6e"
	I1212 00:35:22.273386  307671 cri.go:89] found id: "0427b9f7a3afccc4063d5b86ecc7f448e0192c4c5812428503d97bac33bfe9cd"
	I1212 00:35:22.273390  307671 cri.go:89] found id: "6eb73517ebee81333693231ad0296e35a758eeb5aa9cc45c7ee7663ea6e652e4"
	I1212 00:35:22.273392  307671 cri.go:89] found id: "07f4a35d8d4d1bc19d0c0dc6b015f381bc482dc379a9a416e57528498aa8e1e5"
	I1212 00:35:22.273396  307671 cri.go:89] found id: "a4dfa21dd3b082f3a9464428da14520464aa97d1551c9a8ddffbf56d063877a6"
	I1212 00:35:22.273400  307671 cri.go:89] found id: "3da8c9c634a0528028b8a0875544b6a0c41e72dc8bf1ff1da95beccf80094376"
	I1212 00:35:22.273419  307671 cri.go:89] found id: "7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50"
	I1212 00:35:22.273424  307671 cri.go:89] found id: "70c68a457925502a9aee2ba9aecc60dbf1e189971126f7728a2e5d3dad2af8c7"
	I1212 00:35:22.273428  307671 cri.go:89] found id: ""
	I1212 00:35:22.273469  307671 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:35:22.286523  307671 out.go:203] 
	W1212 00:35:22.287592  307671 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:35:22.287607  307671 out.go:285] * 
	* 
	W1212 00:35:22.292335  307671 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:35:22.293717  307671 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-858659 --alsologtostderr -v=1 failed: exit status 80
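Note on the failure mode: the stderr capture above shows the pause path listing kube-system/kubernetes-dashboard containers with crictl and then repeatedly running `sudo runc list -f json`, which keeps failing with `open /run/runc: no such file or directory` until the retries (~129ms, ~227ms, ~584ms) are exhausted and the command exits with GUEST_PAUSE. The Go sketch below is only a hypothetical, local-debugging reproduction of that retry-until-give-up pattern; it is not minikube's implementation, and `runCmd` merely stands in for the ssh_runner calls seen in the log.

	// Hypothetical sketch of the retrying "runc list" check seen in the stderr capture.
	// runCmd stands in for minikube's ssh_runner; here it just runs the command locally.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runCmd runs a shell command and returns its combined output plus any error.
	func runCmd(cmdline string) (string, error) {
		out, err := exec.Command("sh", "-c", cmdline).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Back-off delays roughly matching the retries logged above.
		delays := []time.Duration{129 * time.Millisecond, 227 * time.Millisecond, 584 * time.Millisecond}

		for attempt := 0; ; attempt++ {
			out, err := runCmd("sudo runc list -f json")
			if err == nil {
				fmt.Println("runc list succeeded:", out)
				return
			}
			if attempt >= len(delays) {
				// Mirrors the GUEST_PAUSE exit: if the runtime state dir (/run/runc)
				// never appears, the listing keeps failing and we give up.
				fmt.Printf("giving up: %v\n%s", err, out)
				return
			}
			fmt.Printf("will retry after %v: %v\n", delays[attempt], err)
			time.Sleep(delays[attempt])
		}
	}
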
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-858659
helpers_test.go:244: (dbg) docker inspect embed-certs-858659:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705",
	        "Created": "2025-12-12T00:33:20.451187787Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292428,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:34:22.437163096Z",
	            "FinishedAt": "2025-12-12T00:34:21.472806865Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/hostname",
	        "HostsPath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/hosts",
	        "LogPath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705-json.log",
	        "Name": "/embed-certs-858659",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-858659:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-858659",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705",
	                "LowerDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-858659",
	                "Source": "/var/lib/docker/volumes/embed-certs-858659/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-858659",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-858659",
	                "name.minikube.sigs.k8s.io": "embed-certs-858659",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9177ca98739f469da2b7ee8e7cb53794ae2e804a71bdc85b19de9b0066d541b1",
	            "SandboxKey": "/var/run/docker/netns/9177ca98739f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-858659": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0c60cc9085547f771925e29c2f723fccc6f964f64d3d64910bb19e85e09e545",
	                    "EndpointID": "49d5fd8b1aa19317bbcb648ace278a24c31dbd8c0d280910efe99d37b9f06433",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b2:b3:21:ac:19:7a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-858659",
	                        "feaf39a3749e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
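The inspect dump above is what the harness relies on when it needs the container's SSH endpoint: the NetworkSettings.Ports map ties each container port (22/tcp, 8443/tcp, ...) to the host port Docker bound on 127.0.0.1 (here 33083 for 22/tcp). A minimal sketch of that lookup, shelling out with the same docker container inspect -f Go-template that appears in the minikube logs further down; the helper name and error handling are illustrative, not minikube's actual cli_runner code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor returns the host port Docker mapped to the given container
// port (e.g. "22/tcp") for a named container, using the same Go-template
// query shown in the minikube logs in this report. Illustrative only.
func hostPortFor(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// For the container inspected above, this would print 33083
	// (the 127.0.0.1 port forwarded to the guest's SSH port 22).
	p, err := hostPortFor("embed-certs-858659", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", p)
}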
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-858659 -n embed-certs-858659
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-858659 -n embed-certs-858659: exit status 2 (321.519657ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-858659 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-858659 logs -n 25: (1.223545446s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p old-k8s-version-743506 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p no-preload-675290 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-858659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ stop    │ -p embed-certs-858659 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-743506 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p no-preload-675290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-858659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ image   │ old-k8s-version-743506 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p old-k8s-version-743506 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ image   │ no-preload-675290 image list --format=json                                                                                                                                                                                                           │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p no-preload-675290 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ delete  │ -p disable-driver-mounts-039387                                                                                                                                                                                                                      │ disable-driver-mounts-039387 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p default-k8s-diff-port-079970 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ image   │ embed-certs-858659 image list --format=json                                                                                                                                                                                                          │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ pause   │ -p embed-certs-858659 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:01.676845  302677 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:01.676993  302677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:01.677000  302677 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:01.677007  302677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:01.677289  302677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:35:01.677788  302677 out.go:368] Setting JSON to false
	I1212 00:35:01.679324  302677 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4648,"bootTime":1765495054,"procs":422,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:35:01.679417  302677 start.go:143] virtualization: kvm guest
	I1212 00:35:01.681915  302677 out.go:179] * [newest-cni-821472] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:35:01.683469  302677 notify.go:221] Checking for updates...
	I1212 00:35:01.684214  302677 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:01.685594  302677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:01.687066  302677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:01.689936  302677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:35:01.694337  302677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:35:01.696522  302677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:01.698402  302677 config.go:182] Loaded profile config "default-k8s-diff-port-079970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:01.698569  302677 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:01.698687  302677 config.go:182] Loaded profile config "kubernetes-upgrade-605797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:01.698885  302677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:01.726288  302677 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:35:01.726426  302677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:01.811670  302677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:35:01.799849378 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:01.811833  302677 docker.go:319] overlay module found
	I1212 00:35:01.813434  302677 out.go:179] * Using the docker driver based on user configuration
	I1212 00:35:01.814701  302677 start.go:309] selected driver: docker
	I1212 00:35:01.814716  302677 start.go:927] validating driver "docker" against <nil>
	I1212 00:35:01.814728  302677 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:01.815393  302677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:01.879143  302677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:35:01.868556648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:01.879427  302677 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1212 00:35:01.879507  302677 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 00:35:01.879785  302677 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 00:35:01.882050  302677 out.go:179] * Using Docker driver with root privileges
	I1212 00:35:01.883265  302677 cni.go:84] Creating CNI manager for ""
	I1212 00:35:01.883332  302677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:01.883343  302677 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:35:01.883409  302677 start.go:353] cluster config:
	{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:01.885115  302677 out.go:179] * Starting "newest-cni-821472" primary control-plane node in "newest-cni-821472" cluster
	I1212 00:35:01.886758  302677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:35:01.888146  302677 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:35:01.889263  302677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:01.889310  302677 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:35:01.889328  302677 cache.go:65] Caching tarball of preloaded images
	I1212 00:35:01.889363  302677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:35:01.889427  302677 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:35:01.889449  302677 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:35:01.889596  302677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:01.889624  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json: {Name:mk8e6ad7ce238dbea537fa1dd3602a56e56a71c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:01.912683  302677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:35:01.912708  302677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:35:01.912721  302677 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:35:01.912755  302677 start.go:360] acquireMachinesLock for newest-cni-821472: {Name:mk1920b4afd40f764aad092389429d0db04875a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:35:01.912844  302677 start.go:364] duration metric: took 68.015µs to acquireMachinesLock for "newest-cni-821472"
	I1212 00:35:01.912868  302677 start.go:93] Provisioning new machine with config: &{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:35:01.912974  302677 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:34:57.061451  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:57.061496  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:57.078642  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:57.078672  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:57.147934  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:57.147960  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:57.147976  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:57.187226  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:57.187259  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:57.217714  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:57.217752  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:57.247042  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:57.247073  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:57.306744  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:57.306777  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:59.839534  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:59.839948  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:59.839995  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:59.840045  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:59.865677  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:59.865695  263844 cri.go:89] found id: ""
	I1212 00:34:59.865702  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:59.865745  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:59.869496  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:59.869563  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:59.894925  263844 cri.go:89] found id: ""
	I1212 00:34:59.894952  263844 logs.go:282] 0 containers: []
	W1212 00:34:59.894961  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:59.894969  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:59.895019  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:59.919987  263844 cri.go:89] found id: ""
	I1212 00:34:59.920013  263844 logs.go:282] 0 containers: []
	W1212 00:34:59.920027  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:59.920035  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:59.920089  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:59.949410  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:59.949436  263844 cri.go:89] found id: ""
	I1212 00:34:59.949445  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:59.949527  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:59.953413  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:59.953491  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:59.981859  263844 cri.go:89] found id: ""
	I1212 00:34:59.981886  263844 logs.go:282] 0 containers: []
	W1212 00:34:59.981897  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:59.981905  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:59.981958  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:00.014039  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:00.014064  263844 cri.go:89] found id: ""
	I1212 00:35:00.014093  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:00.014157  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:00.019033  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:00.019101  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:00.047071  263844 cri.go:89] found id: ""
	I1212 00:35:00.047098  263844 logs.go:282] 0 containers: []
	W1212 00:35:00.047110  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:00.047132  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:00.047182  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:00.074181  263844 cri.go:89] found id: ""
	I1212 00:35:00.074200  263844 logs.go:282] 0 containers: []
	W1212 00:35:00.074214  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:00.074222  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:00.074234  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:00.151605  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:00.151642  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:00.166946  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:00.166974  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:00.225208  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:00.225231  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:00.225245  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:00.258713  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:00.258739  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:00.286626  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:00.286654  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:00.311078  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:00.311101  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:00.366630  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:00.366659  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 00:34:57.522454  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:35:00.022987  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	I1212 00:35:01.428421  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Running}}
	I1212 00:35:01.447336  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:01.468796  300250 cli_runner.go:164] Run: docker exec default-k8s-diff-port-079970 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:35:01.515960  300250 oci.go:144] the created container "default-k8s-diff-port-079970" has a running status.
	I1212 00:35:01.515989  300250 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa...
	I1212 00:35:01.609883  300250 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:35:01.636104  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:01.657225  300250 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:35:01.657249  300250 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-079970 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:35:01.719543  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:01.742574  300250 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:01.742665  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:01.772749  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:01.773093  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:01.773112  300250 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:01.775590  300250 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58832->127.0.0.1:33088: read: connection reset by peer
	I1212 00:35:04.907004  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-079970
	
	I1212 00:35:04.907029  300250 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-079970"
	I1212 00:35:04.907083  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:04.925088  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:04.925306  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:04.925325  300250 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-079970 && echo "default-k8s-diff-port-079970" | sudo tee /etc/hostname
	I1212 00:35:05.104218  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-079970
	
	I1212 00:35:05.104321  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:05.122538  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:05.122808  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:05.122836  300250 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-079970' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-079970/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-079970' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:05.254825  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:05.254864  300250 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:05.254892  300250 ubuntu.go:190] setting up certificates
	I1212 00:35:05.254908  300250 provision.go:84] configureAuth start
	I1212 00:35:05.254998  300250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079970
	I1212 00:35:05.273295  300250 provision.go:143] copyHostCerts
	I1212 00:35:05.273350  300250 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:05.273360  300250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:05.273417  300250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:05.273528  300250 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:05.273537  300250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:05.273566  300250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:05.273626  300250 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:05.273633  300250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:05.273656  300250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:05.273705  300250 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-079970 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-079970 localhost minikube]
	I1212 00:35:05.296916  300250 provision.go:177] copyRemoteCerts
	I1212 00:35:05.296964  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:05.296999  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:05.315032  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:05.408808  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:05.456913  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 00:35:05.473342  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:35:05.490425  300250 provision.go:87] duration metric: took 235.494662ms to configureAuth
	I1212 00:35:05.490445  300250 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:05.490622  300250 config.go:182] Loaded profile config "default-k8s-diff-port-079970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:05.490727  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:05.507963  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:05.508252  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:05.508277  300250 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:01.914973  302677 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 00:35:01.915193  302677 start.go:159] libmachine.API.Create for "newest-cni-821472" (driver="docker")
	I1212 00:35:01.915221  302677 client.go:173] LocalClient.Create starting
	I1212 00:35:01.915286  302677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem
	I1212 00:35:01.915328  302677 main.go:143] libmachine: Decoding PEM data...
	I1212 00:35:01.915347  302677 main.go:143] libmachine: Parsing certificate...
	I1212 00:35:01.915411  302677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem
	I1212 00:35:01.915433  302677 main.go:143] libmachine: Decoding PEM data...
	I1212 00:35:01.915443  302677 main.go:143] libmachine: Parsing certificate...
	I1212 00:35:01.915761  302677 cli_runner.go:164] Run: docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:35:01.931948  302677 cli_runner.go:211] docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:35:01.932008  302677 network_create.go:284] running [docker network inspect newest-cni-821472] to gather additional debugging logs...
	I1212 00:35:01.932032  302677 cli_runner.go:164] Run: docker network inspect newest-cni-821472
	W1212 00:35:01.950167  302677 cli_runner.go:211] docker network inspect newest-cni-821472 returned with exit code 1
	I1212 00:35:01.950190  302677 network_create.go:287] error running [docker network inspect newest-cni-821472]: docker network inspect newest-cni-821472: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-821472 not found
	I1212 00:35:01.950205  302677 network_create.go:289] output of [docker network inspect newest-cni-821472]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-821472 not found
	
	** /stderr **
	I1212 00:35:01.950338  302677 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:01.968471  302677 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ad72354fdcf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:1e:a7:00:22:62} reservation:<nil>}
	I1212 00:35:01.969916  302677 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-18bf2e8051c8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:8e:8f:a6:9d:0c} reservation:<nil>}
	I1212 00:35:01.970685  302677 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5251ddf0a35 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:5a:cf:fd:1d:2a} reservation:<nil>}
	I1212 00:35:01.971365  302677 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b830d0}
	I1212 00:35:01.971387  302677 network_create.go:124] attempt to create docker network newest-cni-821472 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 00:35:01.971442  302677 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-821472 newest-cni-821472
	I1212 00:35:02.021201  302677 network_create.go:108] docker network newest-cni-821472 192.168.76.0/24 created
	I1212 00:35:02.021230  302677 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-821472" container
	I1212 00:35:02.021299  302677 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:35:02.039525  302677 cli_runner.go:164] Run: docker volume create newest-cni-821472 --label name.minikube.sigs.k8s.io=newest-cni-821472 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:35:02.058234  302677 oci.go:103] Successfully created a docker volume newest-cni-821472
	I1212 00:35:02.058323  302677 cli_runner.go:164] Run: docker run --rm --name newest-cni-821472-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-821472 --entrypoint /usr/bin/test -v newest-cni-821472:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:35:02.440426  302677 oci.go:107] Successfully prepared a docker volume newest-cni-821472
	I1212 00:35:02.440502  302677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:02.440515  302677 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:35:02.440587  302677 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-821472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:35:06.233731  302677 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-821472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.793098349s)
	I1212 00:35:06.233764  302677 kic.go:203] duration metric: took 3.793245572s to extract preloaded images to volume ...
	W1212 00:35:06.233853  302677 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 00:35:06.233893  302677 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 00:35:06.233935  302677 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:35:06.298519  302677 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-821472 --name newest-cni-821472 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-821472 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-821472 --network newest-cni-821472 --ip 192.168.76.2 --volume newest-cni-821472:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 00:35:06.590905  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Running}}
	I1212 00:35:06.609781  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:06.629292  302677 cli_runner.go:164] Run: docker exec newest-cni-821472 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:35:02.896994  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:02.897634  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:02.897694  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:02.897758  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:02.924538  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:02.924572  263844 cri.go:89] found id: ""
	I1212 00:35:02.924582  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:02.924639  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:02.928500  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:02.928556  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:02.957640  263844 cri.go:89] found id: ""
	I1212 00:35:02.957663  263844 logs.go:282] 0 containers: []
	W1212 00:35:02.957675  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:02.957682  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:02.957749  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:02.984525  263844 cri.go:89] found id: ""
	I1212 00:35:02.984548  263844 logs.go:282] 0 containers: []
	W1212 00:35:02.984558  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:02.984566  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:02.984634  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:03.010719  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:03.010742  263844 cri.go:89] found id: ""
	I1212 00:35:03.010751  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:03.010804  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:03.014657  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:03.014720  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:03.040940  263844 cri.go:89] found id: ""
	I1212 00:35:03.040963  263844 logs.go:282] 0 containers: []
	W1212 00:35:03.040973  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:03.040980  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:03.041037  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:03.067543  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:03.067569  263844 cri.go:89] found id: ""
	I1212 00:35:03.067580  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:03.067641  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:03.071653  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:03.071720  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:03.099943  263844 cri.go:89] found id: ""
	I1212 00:35:03.099969  263844 logs.go:282] 0 containers: []
	W1212 00:35:03.099980  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:03.099988  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:03.100045  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:03.127350  263844 cri.go:89] found id: ""
	I1212 00:35:03.127376  263844 logs.go:282] 0 containers: []
	W1212 00:35:03.127388  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:03.127400  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:03.127416  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:03.208587  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:03.208622  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:03.223382  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:03.223406  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:03.287357  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:03.287381  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:03.287402  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:03.320653  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:03.320683  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:03.347455  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:03.347501  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:03.377133  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:03.377159  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:03.442737  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:03.442776  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:05.978082  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:05.978455  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:05.978530  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:05.978587  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:06.006237  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:06.006255  263844 cri.go:89] found id: ""
	I1212 00:35:06.006262  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:06.006310  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:06.010366  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:06.010456  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:06.039694  263844 cri.go:89] found id: ""
	I1212 00:35:06.039716  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.039725  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:06.039733  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:06.039785  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:06.065594  263844 cri.go:89] found id: ""
	I1212 00:35:06.065618  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.065628  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:06.065639  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:06.065685  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:06.091181  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:06.091200  263844 cri.go:89] found id: ""
	I1212 00:35:06.091207  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:06.091259  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:06.094942  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:06.095000  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:06.118810  263844 cri.go:89] found id: ""
	I1212 00:35:06.118827  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.118834  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:06.118839  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:06.118881  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:06.143665  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:06.143686  263844 cri.go:89] found id: ""
	I1212 00:35:06.143694  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:06.143746  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:06.147318  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:06.147376  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:06.172858  263844 cri.go:89] found id: ""
	I1212 00:35:06.172883  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.172893  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:06.172901  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:06.172943  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:06.198461  263844 cri.go:89] found id: ""
	I1212 00:35:06.198494  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.198503  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:06.198514  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:06.198529  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:06.224744  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:06.224766  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:06.291819  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:06.291848  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:06.324908  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:06.324941  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:06.411894  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:06.411924  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:06.430682  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:06.430713  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:06.498584  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:06.498606  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:06.498619  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:06.531410  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:06.531436  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	W1212 00:35:02.521725  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:35:05.022110  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	I1212 00:35:07.023465  292217 pod_ready.go:94] pod "coredns-66bc5c9577-8x66p" is "Ready"
	I1212 00:35:07.023510  292217 pod_ready.go:86] duration metric: took 34.506810459s for pod "coredns-66bc5c9577-8x66p" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.026308  292217 pod_ready.go:83] waiting for pod "etcd-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.031339  292217 pod_ready.go:94] pod "etcd-embed-certs-858659" is "Ready"
	I1212 00:35:07.031364  292217 pod_ready.go:86] duration metric: took 5.030782ms for pod "etcd-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.033638  292217 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.037393  292217 pod_ready.go:94] pod "kube-apiserver-embed-certs-858659" is "Ready"
	I1212 00:35:07.037409  292217 pod_ready.go:86] duration metric: took 3.7473ms for pod "kube-apiserver-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.039158  292217 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:05.994458  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:05.994507  300250 machine.go:97] duration metric: took 4.251910101s to provisionDockerMachine
	I1212 00:35:05.994521  300250 client.go:176] duration metric: took 10.066395512s to LocalClient.Create
	I1212 00:35:05.994537  300250 start.go:167] duration metric: took 10.066456505s to libmachine.API.Create "default-k8s-diff-port-079970"
	I1212 00:35:05.994547  300250 start.go:293] postStartSetup for "default-k8s-diff-port-079970" (driver="docker")
	I1212 00:35:05.994559  300250 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:05.994632  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:05.994709  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.013629  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.193644  300250 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:06.197617  300250 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:06.197647  300250 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:06.197663  300250 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:06.197726  300250 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:06.197851  300250 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:06.197983  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:06.206129  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:06.228239  300250 start.go:296] duration metric: took 233.677045ms for postStartSetup
	I1212 00:35:06.228629  300250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079970
	I1212 00:35:06.249261  300250 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/config.json ...
	I1212 00:35:06.249602  300250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:06.249656  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.274204  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.370994  300250 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:06.375520  300250 start.go:128] duration metric: took 10.449576963s to createHost
	I1212 00:35:06.375543  300250 start.go:83] releasing machines lock for "default-k8s-diff-port-079970", held for 10.449707099s
	I1212 00:35:06.375608  300250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079970
	I1212 00:35:06.394357  300250 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:06.394412  300250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:06.394417  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.394533  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.414536  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.414896  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.513946  300250 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:06.578632  300250 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:06.614844  300250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:06.619698  300250 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:06.619768  300250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:06.648376  300250 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:35:06.648400  300250 start.go:496] detecting cgroup driver to use...
	I1212 00:35:06.648437  300250 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:06.648518  300250 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:06.671629  300250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:06.686247  300250 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:06.686300  300250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:06.705702  300250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:06.724679  300250 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:06.831577  300250 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:06.959398  300250 docker.go:234] disabling docker service ...
	I1212 00:35:06.959458  300250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:06.980174  300250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:06.994308  300250 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:07.087727  300250 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:07.164553  300250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:35:07.176958  300250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:07.190774  300250 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:07.190827  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.200432  300250 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:07.200487  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.208778  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.216974  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.225627  300250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:07.233231  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.241973  300250 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.256118  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.264415  300250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:07.271752  300250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:07.278861  300250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:07.354779  300250 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:35:07.473228  300250 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:07.473293  300250 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:07.476916  300250 start.go:564] Will wait 60s for crictl version
	I1212 00:35:07.476963  300250 ssh_runner.go:195] Run: which crictl
	I1212 00:35:07.480267  300250 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:07.504728  300250 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:35:07.504799  300250 ssh_runner.go:195] Run: crio --version
	I1212 00:35:07.530870  300250 ssh_runner.go:195] Run: crio --version
	I1212 00:35:07.558537  300250 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:35:07.220781  292217 pod_ready.go:94] pod "kube-controller-manager-embed-certs-858659" is "Ready"
	I1212 00:35:07.220800  292217 pod_ready.go:86] duration metric: took 181.625687ms for pod "kube-controller-manager-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.420602  292217 pod_ready.go:83] waiting for pod "kube-proxy-httpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.820186  292217 pod_ready.go:94] pod "kube-proxy-httpr" is "Ready"
	I1212 00:35:07.820209  292217 pod_ready.go:86] duration metric: took 399.582316ms for pod "kube-proxy-httpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:08.021113  292217 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:08.420866  292217 pod_ready.go:94] pod "kube-scheduler-embed-certs-858659" is "Ready"
	I1212 00:35:08.420892  292217 pod_ready.go:86] duration metric: took 399.752565ms for pod "kube-scheduler-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:08.420907  292217 pod_ready.go:40] duration metric: took 35.909864777s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:35:08.463083  292217 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:35:08.464948  292217 out.go:179] * Done! kubectl is now configured to use "embed-certs-858659" cluster and "default" namespace by default
	I1212 00:35:07.559551  300250 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-079970 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:07.576167  300250 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:07.580098  300250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:07.589991  300250 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-079970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:07.590086  300250 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:35:07.590127  300250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:07.620329  300250 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:07.620351  300250 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:07.620400  300250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:07.644998  300250 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:07.645015  300250 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:07.645022  300250 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1212 00:35:07.645104  300250 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-079970 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:35:07.645170  300250 ssh_runner.go:195] Run: crio config
	I1212 00:35:07.690243  300250 cni.go:84] Creating CNI manager for ""
	I1212 00:35:07.690273  300250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:07.690294  300250 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:35:07.690324  300250 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-079970 NodeName:default-k8s-diff-port-079970 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:07.690467  300250 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-079970"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:35:07.690549  300250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:35:07.698235  300250 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:07.698297  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:07.705827  300250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1212 00:35:07.717731  300250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:35:07.731840  300250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1212 00:35:07.743718  300250 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:07.747092  300250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:07.756184  300250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:07.837070  300250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:07.858364  300250 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970 for IP: 192.168.103.2
	I1212 00:35:07.858382  300250 certs.go:195] generating shared ca certs ...
	I1212 00:35:07.858403  300250 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:07.858571  300250 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:07.858647  300250 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:07.858665  300250 certs.go:257] generating profile certs ...
	I1212 00:35:07.858744  300250 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.key
	I1212 00:35:07.858767  300250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.crt with IP's: []
	I1212 00:35:08.016689  300250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.crt ...
	I1212 00:35:08.016714  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.crt: {Name:mk279da736f294eb825962e1a4edee25eac6315c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.016870  300250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.key ...
	I1212 00:35:08.016882  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.key: {Name:mkfdedaa1208476212f31534b586a502d7549554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.016963  300250 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396
	I1212 00:35:08.016979  300250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1212 00:35:08.149793  300250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396 ...
	I1212 00:35:08.149819  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396: {Name:mka877547c686ae761ca469a57d769f1b6209dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.149964  300250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396 ...
	I1212 00:35:08.149977  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396: {Name:mk85552646d74132d4452b62dd9a7c99b447f23e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.150046  300250 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt
	I1212 00:35:08.150128  300250 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key
	I1212 00:35:08.150185  300250 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key
	I1212 00:35:08.150200  300250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt with IP's: []
	I1212 00:35:08.211959  300250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt ...
	I1212 00:35:08.211982  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt: {Name:mkf49b46b9109148320675b03a3fb18a2de5b067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.212121  300250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key ...
	I1212 00:35:08.212135  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key: {Name:mke8549528b3e93e632d75bea181d060323f48f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.212313  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:08.212351  300250 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:08.212361  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:08.212385  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:08.212408  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:08.212438  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:08.212491  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:08.213045  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:08.231539  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:08.248058  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:08.265115  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:08.282076  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 00:35:08.298207  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:08.314097  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:08.329970  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:35:08.345884  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:08.363142  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:08.379029  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:08.394643  300250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:35:08.405961  300250 ssh_runner.go:195] Run: openssl version
	I1212 00:35:08.411533  300250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.418428  300250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:08.425583  300250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.429027  300250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.429071  300250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.466251  300250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:08.473689  300250 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:35:08.482528  300250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.493011  300250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:08.500005  300250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.503427  300250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.503503  300250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.544140  300250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:08.552101  300250 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:35:08.560501  300250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.569346  300250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:08.578244  300250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.582362  300250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.582410  300250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.623691  300250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:08.631946  300250 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:08.639036  300250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:08.642462  300250 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:35:08.642530  300250 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-079970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:08.642593  300250 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:08.642659  300250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:08.668461  300250 cri.go:89] found id: ""
	I1212 00:35:08.668546  300250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:08.676116  300250 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:35:08.683428  300250 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:35:08.683469  300250 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:35:08.690898  300250 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:35:08.690916  300250 kubeadm.go:158] found existing configuration files:
	
	I1212 00:35:08.690954  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 00:35:08.698073  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:35:08.698143  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:35:08.705407  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 00:35:08.712693  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:35:08.712737  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:35:08.719767  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 00:35:08.727185  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:35:08.727228  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:35:08.736123  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 00:35:08.745640  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:35:08.745692  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:35:08.752893  300250 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:35:08.793583  300250 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:35:08.793654  300250 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:35:08.815256  300250 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:35:08.815324  300250 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:35:08.815354  300250 kubeadm.go:319] OS: Linux
	I1212 00:35:08.815415  300250 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:35:08.815505  300250 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:35:08.815591  300250 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:35:08.815662  300250 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:35:08.815736  300250 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:35:08.815811  300250 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:35:08.815859  300250 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:35:08.815899  300250 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:35:08.870824  300250 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:35:08.870961  300250 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:35:08.871069  300250 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:35:08.879811  300250 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:35:08.882551  300250 out.go:252]   - Generating certificates and keys ...
	I1212 00:35:08.882660  300250 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:35:08.882751  300250 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:35:09.176928  300250 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:35:09.363520  300250 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:35:09.693528  300250 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:35:09.731067  300250 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:35:09.965845  300250 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:35:09.966025  300250 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-079970 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 00:35:10.193416  300250 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:35:10.193638  300250 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-079970 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 00:35:10.269537  300250 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:35:06.679223  302677 oci.go:144] the created container "newest-cni-821472" has a running status.
	I1212 00:35:06.679254  302677 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa...
	I1212 00:35:06.738145  302677 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:35:06.769410  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:06.786614  302677 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:35:06.786644  302677 kic_runner.go:114] Args: [docker exec --privileged newest-cni-821472 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:35:06.834201  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:06.855717  302677 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:06.855800  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:06.888834  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:06.889195  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:06.889213  302677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:06.890003  302677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43636->127.0.0.1:33093: read: connection reset by peer
	I1212 00:35:10.020812  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:10.020840  302677 ubuntu.go:182] provisioning hostname "newest-cni-821472"
	I1212 00:35:10.020911  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.038793  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:10.039049  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:10.039068  302677 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-821472 && echo "newest-cni-821472" | sudo tee /etc/hostname
	I1212 00:35:10.178884  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:10.178957  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.196926  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:10.197149  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:10.197168  302677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-821472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-821472/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-821472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:10.325457  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
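
The shell block above is the provisioner's idempotent /etc/hosts update: if some line already ends in the node name nothing happens, otherwise an existing 127.0.1.1 entry is rewritten, or a new one is appended. A rough local equivalent in Go, with ensureHostsEntry as a hypothetical helper operating on a file path rather than over SSH:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mimics the shell above: keep 127.0.1.1 pointed at the
// node's hostname without duplicating entries.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	endsWithName := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)

	for _, l := range lines {
		if endsWithName.MatchString(l) {
			return nil // entry already present, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing entry
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname) // or append a new one
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "newest-cni-821472"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
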
	I1212 00:35:10.325500  302677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:10.325523  302677 ubuntu.go:190] setting up certificates
	I1212 00:35:10.325534  302677 provision.go:84] configureAuth start
	I1212 00:35:10.325583  302677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:10.343437  302677 provision.go:143] copyHostCerts
	I1212 00:35:10.343515  302677 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:10.343528  302677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:10.343592  302677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:10.343673  302677 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:10.343682  302677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:10.343707  302677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:10.343760  302677 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:10.343767  302677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:10.343788  302677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:10.343834  302677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.newest-cni-821472 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-821472]
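
The line above generates the machine's server certificate, signed by the minikube CA and carrying the SANs listed (loopback, the container IP, and the host names). A rough illustration with Go's crypto/x509, using a throwaway CA in place of the real ca.pem/ca-key.pem that minikube loads from .minikube/certs:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA for the sketch only.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with the org and SAN list from the log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-821472"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-821472"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
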
	I1212 00:35:10.384676  302677 provision.go:177] copyRemoteCerts
	I1212 00:35:10.384719  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:10.384774  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.402191  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:10.496002  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:10.517047  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:35:10.533068  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:35:10.549151  302677 provision.go:87] duration metric: took 223.607325ms to configureAuth
	I1212 00:35:10.549173  302677 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:10.549372  302677 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:10.549507  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.566450  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:10.566801  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:10.566825  302677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:10.840386  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:10.840420  302677 machine.go:97] duration metric: took 3.984678656s to provisionDockerMachine
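
The last SSH command of the provisioning phase writes a one-line sysconfig drop-in that marks the service CIDR (10.96.0.0/12) as an insecure registry for CRI-O and then restarts the daemon so the option takes effect. A local, run-as-root sketch of the same write, with the path and content taken from the command above:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Matches the file tee'd in the log: one environment line read by the
	// crio unit via /etc/sysconfig/crio.minikube.
	content := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Restart CRI-O so it picks up the new options.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "restart crio: %v\n%s", err, out)
		os.Exit(1)
	}
}
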
	I1212 00:35:10.840443  302677 client.go:176] duration metric: took 8.925215047s to LocalClient.Create
	I1212 00:35:10.840469  302677 start.go:167] duration metric: took 8.925275616s to libmachine.API.Create "newest-cni-821472"
	I1212 00:35:10.840505  302677 start.go:293] postStartSetup for "newest-cni-821472" (driver="docker")
	I1212 00:35:10.840523  302677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:10.840596  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:10.840670  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.860332  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:10.956443  302677 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:10.959764  302677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:10.959792  302677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:10.959804  302677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:10.959857  302677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:10.959954  302677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:10.960087  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:10.967140  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:10.985778  302677 start.go:296] duration metric: took 145.261225ms for postStartSetup
	I1212 00:35:10.986158  302677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:11.004009  302677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:11.004287  302677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:11.004341  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:11.021447  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:11.111879  302677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:11.116117  302677 start.go:128] duration metric: took 9.20313041s to createHost
	I1212 00:35:11.116139  302677 start.go:83] releasing machines lock for "newest-cni-821472", held for 9.203283528s
	I1212 00:35:11.116244  302677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:11.133795  302677 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:11.133840  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:11.133860  302677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:11.133944  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:11.152108  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:11.153248  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:11.246020  302677 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:11.300646  302677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:11.334446  302677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:11.338640  302677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:11.338700  302677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:11.362650  302677 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
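
Because kindnet will be installed as the CNI, any pre-existing bridge or podman CNI configs are renamed with a .mk_disabled suffix so CRI-O does not pick them up; the find/-exec mv command above does exactly that. A local sketch of the same rename pass (same directory and suffix, no SSH):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled or not a plain file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", src)
		}
	}
}
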
	I1212 00:35:11.362667  302677 start.go:496] detecting cgroup driver to use...
	I1212 00:35:11.362703  302677 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:11.362745  302677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:11.378085  302677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:11.389462  302677 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:11.389524  302677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:11.405814  302677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:11.421748  302677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:11.501993  302677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:11.586974  302677 docker.go:234] disabling docker service ...
	I1212 00:35:11.587043  302677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:11.606777  302677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:11.618506  302677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:09.059116  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:09.059525  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:09.059584  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:09.059632  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:09.089294  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:09.089312  263844 cri.go:89] found id: ""
	I1212 00:35:09.089319  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:09.089386  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:09.093045  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:09.093104  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:09.118495  263844 cri.go:89] found id: ""
	I1212 00:35:09.118518  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.118527  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:09.118535  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:09.118588  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:09.143107  263844 cri.go:89] found id: ""
	I1212 00:35:09.143128  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.143137  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:09.143144  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:09.143196  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:09.167389  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:09.167405  263844 cri.go:89] found id: ""
	I1212 00:35:09.167414  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:09.167459  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:09.171059  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:09.171107  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:09.195715  263844 cri.go:89] found id: ""
	I1212 00:35:09.195735  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.195744  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:09.195752  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:09.195808  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:09.220144  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:09.220165  263844 cri.go:89] found id: ""
	I1212 00:35:09.220174  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:09.220239  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:09.223868  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:09.223918  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:09.248862  263844 cri.go:89] found id: ""
	I1212 00:35:09.248885  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.248893  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:09.248900  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:09.248955  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:09.275068  263844 cri.go:89] found id: ""
	I1212 00:35:09.275093  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.275102  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:09.275114  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:09.275127  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:09.366099  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:09.366126  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:09.379543  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:09.379566  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:09.432943  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:09.432958  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:09.432969  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:09.463524  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:09.463547  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:09.490294  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:09.490317  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:09.514799  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:09.514828  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:09.571564  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:09.571589  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
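
After the apiserver healthz probe is refused, minikube gathers a capped slice of each log source before retrying: the kubelet and CRI-O journals, dmesg, kubectl describe nodes, per-container crictl logs, and a final container listing. A condensed local sketch of that sweep, with the commands lifted from the Run: lines above (the describe-nodes and per-container steps, and the tail cap on dmesg, are omitted):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string][]string{
		"kubelet":          {"journalctl", "-u", "kubelet", "-n", "400"},
		"dmesg":            {"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
		"CRI-O":            {"journalctl", "-u", "crio", "-n", "400"},
		"container status": {"crictl", "ps", "-a"},
	}
	for name, cmd := range sources {
		// Each source is collected best-effort; a failing command is
		// reported inline rather than aborting the sweep.
		out, err := exec.Command("sudo", cmd...).CombinedOutput()
		fmt.Printf("==> %s (err=%v) <==\n%s\n", name, err, out)
	}
}
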
	I1212 00:35:10.788385  300250 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:35:11.057552  300250 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:35:11.057688  300250 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:35:11.225689  300250 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:35:11.681036  300250 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:35:11.823928  300250 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:35:11.882037  300250 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:35:12.127371  300250 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:35:12.128027  300250 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:35:12.132435  300250 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:35:11.703202  302677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:11.784049  302677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:35:11.794982  302677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:11.808444  302677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:11.808518  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.818076  302677 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:11.818133  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.826275  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.834017  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.841736  302677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:11.848971  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.856669  302677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.868903  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.876935  302677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:11.883730  302677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:11.890166  302677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:11.971228  302677 ssh_runner.go:195] Run: sudo systemctl restart crio
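
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to systemd, re-add conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, after which crio is restarted. A sketch of the central edits in Go (the default_sysctls handling is left to the sed lines above):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	s := string(data)
	// Pin the pause image, mirroring the first sed command.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Switch to the systemd cgroup manager and re-add conmon_cgroup; the
	// real sequence first deletes any existing conmon_cgroup line.
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
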
	I1212 00:35:12.107537  302677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:12.107601  302677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:12.111902  302677 start.go:564] Will wait 60s for crictl version
	I1212 00:35:12.111953  302677 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.115979  302677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:12.143022  302677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:35:12.143092  302677 ssh_runner.go:195] Run: crio --version
	I1212 00:35:12.172766  302677 ssh_runner.go:195] Run: crio --version
	I1212 00:35:12.214057  302677 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:35:12.215182  302677 cli_runner.go:164] Run: docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:12.234840  302677 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:12.238650  302677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:12.250364  302677 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 00:35:12.251584  302677 kubeadm.go:884] updating cluster {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:12.251750  302677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:12.251814  302677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:12.286886  302677 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:12.286909  302677 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:12.286960  302677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:12.312952  302677 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:12.312977  302677 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:12.312986  302677 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 00:35:12.313097  302677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-821472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:35:12.313180  302677 ssh_runner.go:195] Run: crio config
	I1212 00:35:12.374192  302677 cni.go:84] Creating CNI manager for ""
	I1212 00:35:12.374218  302677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:12.374237  302677 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 00:35:12.374270  302677 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-821472 NodeName:newest-cni-821472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:12.374434  302677 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-821472"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:35:12.374535  302677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:35:12.383578  302677 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:12.383637  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:12.391434  302677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:35:12.404377  302677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:35:12.423679  302677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1212 00:35:12.436660  302677 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:12.440733  302677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:12.451195  302677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:12.537285  302677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:12.559799  302677 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472 for IP: 192.168.76.2
	I1212 00:35:12.559818  302677 certs.go:195] generating shared ca certs ...
	I1212 00:35:12.559838  302677 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.559992  302677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:12.560043  302677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:12.560055  302677 certs.go:257] generating profile certs ...
	I1212 00:35:12.560117  302677 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key
	I1212 00:35:12.560142  302677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.crt with IP's: []
	I1212 00:35:12.605767  302677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.crt ...
	I1212 00:35:12.605794  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.crt: {Name:mk62a438d5b5213a1e604f2aad5a254998a9c462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.605974  302677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key ...
	I1212 00:35:12.605988  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key: {Name:mk63a3fc23e864057dcaa9c8effbe724759615bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.606086  302677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0
	I1212 00:35:12.606104  302677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 00:35:12.656429  302677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0 ...
	I1212 00:35:12.656451  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0: {Name:mkcaca46142fb6d4be74e9883db090ebf7e5cf3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.656597  302677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0 ...
	I1212 00:35:12.656613  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0: {Name:mkf4411e99bacc1b752d816d27a1043dcd50d436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.656687  302677 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt
	I1212 00:35:12.656757  302677 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key
	I1212 00:35:12.656810  302677 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key
	I1212 00:35:12.656825  302677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt with IP's: []
	I1212 00:35:12.794384  302677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt ...
	I1212 00:35:12.794408  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt: {Name:mk0075073046a87e6d2960d42dc638b72f046c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.794577  302677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key ...
	I1212 00:35:12.794595  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key: {Name:mk204e192392804e3969ede81a4f299490ab4215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.794762  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:12.794798  302677 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:12.794808  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:12.794839  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:12.794875  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:12.794899  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:12.794953  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:12.795698  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:12.813014  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:12.829220  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:12.845778  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:12.861807  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:35:12.878084  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:12.895917  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:12.912533  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:35:12.928436  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:12.946171  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:12.962496  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:12.978793  302677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:35:12.990605  302677 ssh_runner.go:195] Run: openssl version
	I1212 00:35:12.996246  302677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.003391  302677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:13.010336  302677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.013787  302677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.013843  302677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.047425  302677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:13.054597  302677 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:35:13.061519  302677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.068422  302677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:13.075772  302677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.079428  302677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.079485  302677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.113308  302677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:13.120292  302677 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:13.127088  302677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.133810  302677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:13.140741  302677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.144153  302677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.144193  302677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.177492  302677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:13.184429  302677 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
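
Each CA certificate copied to /usr/share/ca-certificates is exposed to OpenSSL-based clients through a /etc/ssl/certs/<subject-hash>.0 symlink; the hashes in the log (51391683, 3ec20f2e, b5213941) come from openssl x509 -hash. A sketch of that step for the minikubeCA.pem file, run as root with local paths instead of SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// The subject hash (e.g. b5213941 above) becomes the <hash>.0 link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // behave like ln -fs: replace any stale link
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pemPath)
}
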
	I1212 00:35:13.191968  302677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:13.196429  302677 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:35:13.196488  302677 kubeadm.go:401] StartCluster: {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:13.196559  302677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:13.196600  302677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:13.228866  302677 cri.go:89] found id: ""
	I1212 00:35:13.228944  302677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:13.238592  302677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:35:13.246897  302677 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:35:13.247000  302677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:35:13.254688  302677 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:35:13.254703  302677 kubeadm.go:158] found existing configuration files:
	
	I1212 00:35:13.254736  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:35:13.263097  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:35:13.263270  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:35:13.273080  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:35:13.282211  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:35:13.282268  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:35:13.291221  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:35:13.302552  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:35:13.302604  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:35:13.313834  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:35:13.325420  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:35:13.325564  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:35:13.336740  302677 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:35:13.383615  302677 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:35:13.383902  302677 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:35:13.451434  302677 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:35:13.451591  302677 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:35:13.451650  302677 kubeadm.go:319] OS: Linux
	I1212 00:35:13.451744  302677 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:35:13.451811  302677 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:35:13.451890  302677 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:35:13.451953  302677 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:35:13.452042  302677 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:35:13.452118  302677 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:35:13.452187  302677 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:35:13.452283  302677 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:35:13.513534  302677 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:35:13.513688  302677 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:35:13.513825  302677 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:35:13.521572  302677 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:35:13.524580  302677 out.go:252]   - Generating certificates and keys ...
	I1212 00:35:13.524672  302677 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:35:13.524815  302677 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:35:13.627518  302677 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:35:13.704108  302677 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:35:13.773103  302677 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:35:13.794355  302677 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:35:13.913209  302677 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:35:13.913398  302677 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-821472] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:35:13.947286  302677 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:35:13.947435  302677 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-821472] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:35:14.159651  302677 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:35:14.216650  302677 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:35:14.472885  302677 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:35:14.473061  302677 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:35:14.633454  302677 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:35:14.753641  302677 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:35:14.814612  302677 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:35:14.895913  302677 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:35:14.914803  302677 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:35:14.915315  302677 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:35:14.919969  302677 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:35:12.133773  300250 out.go:252]   - Booting up control plane ...
	I1212 00:35:12.133887  300250 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:35:12.134001  300250 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:35:12.135036  300250 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:35:12.151772  300250 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:35:12.151914  300250 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:35:12.159834  300250 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:35:12.160143  300250 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:35:12.160211  300250 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:35:12.271327  300250 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:35:12.271562  300250 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:35:12.773136  300250 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.727313ms
	I1212 00:35:12.778862  300250 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:35:12.778987  300250 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1212 00:35:12.779121  300250 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:35:12.779237  300250 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:35:14.491599  300250 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.712585121s
	I1212 00:35:15.317576  300250 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.53833817s
	I1212 00:35:14.921519  302677 out.go:252]   - Booting up control plane ...
	I1212 00:35:14.921651  302677 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:35:14.924749  302677 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:35:14.924842  302677 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:35:14.943271  302677 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:35:14.943423  302677 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:35:14.949795  302677 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:35:14.950158  302677 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:35:14.950225  302677 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:35:15.059243  302677 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:35:15.059468  302677 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:35:15.560643  302677 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.54438ms
	I1212 00:35:15.564093  302677 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:35:15.564228  302677 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1212 00:35:15.564345  302677 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:35:15.564457  302677 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:35:16.569708  302677 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005463869s
	I1212 00:35:12.100744  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:12.101121  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:12.101177  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:12.101233  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:12.130434  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:12.130457  263844 cri.go:89] found id: ""
	I1212 00:35:12.130467  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:12.130537  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.134919  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:12.134991  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:12.165688  263844 cri.go:89] found id: ""
	I1212 00:35:12.165712  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.165723  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:12.165735  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:12.165800  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:12.200067  263844 cri.go:89] found id: ""
	I1212 00:35:12.200095  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.200105  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:12.200114  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:12.200175  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:12.229100  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:12.229122  263844 cri.go:89] found id: ""
	I1212 00:35:12.229132  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:12.229188  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.233550  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:12.233611  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:12.263168  263844 cri.go:89] found id: ""
	I1212 00:35:12.263191  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.263197  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:12.263203  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:12.263249  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:12.290993  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:12.291011  263844 cri.go:89] found id: ""
	I1212 00:35:12.291020  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:12.291082  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.294886  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:12.294950  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:12.324024  263844 cri.go:89] found id: ""
	I1212 00:35:12.324047  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.324056  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:12.324064  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:12.324126  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:12.351428  263844 cri.go:89] found id: ""
	I1212 00:35:12.351452  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.351462  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:12.351498  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:12.351516  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:12.385548  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:12.385577  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:12.416157  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:12.416182  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:12.442624  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:12.442646  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:12.502981  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:12.503012  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:12.532856  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:12.532883  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:12.653112  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:12.653143  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:12.668189  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:12.668212  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:12.724901  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:15.225542  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:15.225965  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:15.226024  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:15.226088  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:15.254653  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:15.254676  263844 cri.go:89] found id: ""
	I1212 00:35:15.254685  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:15.254771  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:15.260402  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:15.260491  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:15.305137  263844 cri.go:89] found id: ""
	I1212 00:35:15.305160  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.305171  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:15.305178  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:15.305245  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:15.333505  263844 cri.go:89] found id: ""
	I1212 00:35:15.333538  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.333552  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:15.333561  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:15.333614  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:15.361356  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:15.361380  263844 cri.go:89] found id: ""
	I1212 00:35:15.361389  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:15.361453  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:15.365357  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:15.365419  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:15.396683  263844 cri.go:89] found id: ""
	I1212 00:35:15.396704  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.396711  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:15.396717  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:15.396773  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:15.429111  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:15.429152  263844 cri.go:89] found id: ""
	I1212 00:35:15.429163  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:15.429219  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:15.433130  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:15.433183  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:15.458694  263844 cri.go:89] found id: ""
	I1212 00:35:15.458722  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.458732  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:15.458740  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:15.458801  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:15.502451  263844 cri.go:89] found id: ""
	I1212 00:35:15.502498  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.502535  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:15.502550  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:15.502570  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:15.533938  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:15.533965  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:15.561307  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:15.561331  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:15.613786  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:15.613818  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:15.643661  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:15.643695  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:15.720562  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:15.720586  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:15.734392  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:15.734426  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:15.787130  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:15.787150  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:15.787162  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:17.280767  300250 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501711696s
	I1212 00:35:17.298890  300250 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:35:17.306798  300250 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:35:17.316204  300250 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:35:17.316535  300250 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-079970 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:35:17.324332  300250 kubeadm.go:319] [bootstrap-token] Using token: vzntbo.0n3hslrivx4nbk6h
	I1212 00:35:17.325581  300250 out.go:252]   - Configuring RBAC rules ...
	I1212 00:35:17.325727  300250 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:35:17.328649  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:35:17.333384  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:35:17.335765  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:35:17.338603  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:35:17.340971  300250 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:35:17.686781  300250 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:35:18.102084  300250 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:35:18.687720  300250 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:35:18.689043  300250 kubeadm.go:319] 
	I1212 00:35:18.689133  300250 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:35:18.689152  300250 kubeadm.go:319] 
	I1212 00:35:18.689249  300250 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:35:18.689270  300250 kubeadm.go:319] 
	I1212 00:35:18.689313  300250 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:35:18.689387  300250 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:35:18.689456  300250 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:35:18.689466  300250 kubeadm.go:319] 
	I1212 00:35:18.689568  300250 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:35:18.689580  300250 kubeadm.go:319] 
	I1212 00:35:18.689626  300250 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:35:18.689634  300250 kubeadm.go:319] 
	I1212 00:35:18.689709  300250 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:35:18.689831  300250 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:35:18.689940  300250 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:35:18.689950  300250 kubeadm.go:319] 
	I1212 00:35:18.690061  300250 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:35:18.690186  300250 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:35:18.690195  300250 kubeadm.go:319] 
	I1212 00:35:18.690314  300250 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token vzntbo.0n3hslrivx4nbk6h \
	I1212 00:35:18.690496  300250 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:35:18.690545  300250 kubeadm.go:319] 	--control-plane 
	I1212 00:35:18.690563  300250 kubeadm.go:319] 
	I1212 00:35:18.690686  300250 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:35:18.690696  300250 kubeadm.go:319] 
	I1212 00:35:18.690812  300250 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token vzntbo.0n3hslrivx4nbk6h \
	I1212 00:35:18.690954  300250 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1212 00:35:18.694339  300250 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:35:18.694550  300250 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:35:18.694567  300250 cni.go:84] Creating CNI manager for ""
	I1212 00:35:18.694576  300250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:18.696703  300250 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 00:35:17.160850  302677 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.596679207s
	I1212 00:35:19.066502  302677 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502324833s
	I1212 00:35:19.085057  302677 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:35:19.096897  302677 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:35:19.107177  302677 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:35:19.107466  302677 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-821472 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:35:19.115974  302677 kubeadm.go:319] [bootstrap-token] Using token: 1hw5s9.frvasufed6x8ofpi
	I1212 00:35:19.117834  302677 out.go:252]   - Configuring RBAC rules ...
	I1212 00:35:19.117975  302677 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:35:19.120835  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:35:19.125865  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:35:19.127972  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:35:19.130202  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:35:19.132498  302677 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:35:19.472755  302677 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:35:19.889183  302677 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:35:20.472283  302677 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:35:20.473228  302677 kubeadm.go:319] 
	I1212 00:35:20.473347  302677 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:35:20.473382  302677 kubeadm.go:319] 
	I1212 00:35:20.473514  302677 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:35:20.473529  302677 kubeadm.go:319] 
	I1212 00:35:20.473564  302677 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:35:20.473653  302677 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:35:20.473726  302677 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:35:20.473742  302677 kubeadm.go:319] 
	I1212 00:35:20.473830  302677 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:35:20.473840  302677 kubeadm.go:319] 
	I1212 00:35:20.473908  302677 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:35:20.473917  302677 kubeadm.go:319] 
	I1212 00:35:20.473990  302677 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:35:20.474112  302677 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:35:20.474211  302677 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:35:20.474220  302677 kubeadm.go:319] 
	I1212 00:35:20.474319  302677 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:35:20.474421  302677 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:35:20.474431  302677 kubeadm.go:319] 
	I1212 00:35:20.474566  302677 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1hw5s9.frvasufed6x8ofpi \
	I1212 00:35:20.474716  302677 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:35:20.474777  302677 kubeadm.go:319] 	--control-plane 
	I1212 00:35:20.474785  302677 kubeadm.go:319] 
	I1212 00:35:20.474895  302677 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:35:20.474908  302677 kubeadm.go:319] 
	I1212 00:35:20.475015  302677 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1hw5s9.frvasufed6x8ofpi \
	I1212 00:35:20.475137  302677 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1212 00:35:20.477381  302677 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:35:20.477537  302677 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:35:20.477568  302677 cni.go:84] Creating CNI manager for ""
	I1212 00:35:20.477580  302677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:20.479065  302677 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 00:35:18.698118  300250 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:35:18.702449  300250 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 00:35:18.702465  300250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:35:18.716877  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:35:18.969312  300250 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:35:18.969392  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:18.969443  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-079970 minikube.k8s.io/updated_at=2025_12_12T00_35_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=default-k8s-diff-port-079970 minikube.k8s.io/primary=true
	I1212 00:35:18.980253  300250 ops.go:34] apiserver oom_adj: -16
	I1212 00:35:19.044290  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:19.544400  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.044663  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.544656  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.480077  302677 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:35:20.484251  302677 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1212 00:35:20.484270  302677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:35:20.498227  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:35:20.722785  302677 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:35:20.722859  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.722863  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-821472 minikube.k8s.io/updated_at=2025_12_12T00_35_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=newest-cni-821472 minikube.k8s.io/primary=true
	I1212 00:35:20.792681  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.792694  302677 ops.go:34] apiserver oom_adj: -16
	I1212 00:35:21.293770  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:18.318161  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:18.318666  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:18.318729  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:18.318786  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:18.352544  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:18.352567  263844 cri.go:89] found id: ""
	I1212 00:35:18.352577  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:18.352636  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:18.357312  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:18.357378  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:18.386881  263844 cri.go:89] found id: ""
	I1212 00:35:18.386902  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.386912  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:18.386919  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:18.386973  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:18.417064  263844 cri.go:89] found id: ""
	I1212 00:35:18.417088  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.417099  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:18.417107  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:18.417167  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:18.447543  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:18.447568  263844 cri.go:89] found id: ""
	I1212 00:35:18.447578  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:18.447647  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:18.452360  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:18.452420  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:18.481882  263844 cri.go:89] found id: ""
	I1212 00:35:18.481911  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.481923  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:18.481931  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:18.481984  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:18.522675  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:18.522698  263844 cri.go:89] found id: ""
	I1212 00:35:18.522707  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:18.522770  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:18.527549  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:18.527622  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:18.557639  263844 cri.go:89] found id: ""
	I1212 00:35:18.557663  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.557673  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:18.557680  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:18.557741  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:18.587529  263844 cri.go:89] found id: ""
	I1212 00:35:18.587558  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.587568  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:18.587582  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:18.587600  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:18.617911  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:18.617944  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:18.677434  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:18.677462  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:18.710192  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:18.710220  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:18.813216  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:18.813247  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:18.829843  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:18.829876  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:18.895288  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:18.895311  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:18.895326  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:18.928602  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:18.928632  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:21.468717  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:21.469075  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:21.469120  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:21.469168  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:21.494583  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:21.494599  263844 cri.go:89] found id: ""
	I1212 00:35:21.494607  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:21.494665  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:21.498756  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:21.498842  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:21.524669  263844 cri.go:89] found id: ""
	I1212 00:35:21.524694  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.524704  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:21.524710  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:21.524752  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:21.550501  263844 cri.go:89] found id: ""
	I1212 00:35:21.550525  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.550537  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:21.550544  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:21.550599  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:21.576757  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:21.576778  263844 cri.go:89] found id: ""
	I1212 00:35:21.576787  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:21.576840  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:21.580808  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:21.580869  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:21.610338  263844 cri.go:89] found id: ""
	I1212 00:35:21.610365  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.610380  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:21.610387  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:21.610458  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:21.636138  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:21.636157  263844 cri.go:89] found id: ""
	I1212 00:35:21.636164  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:21.636218  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:21.640049  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:21.640100  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:21.665873  263844 cri.go:89] found id: ""
	I1212 00:35:21.665897  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.665905  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:21.665913  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:21.665980  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:21.692021  263844 cri.go:89] found id: ""
	I1212 00:35:21.692046  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.692057  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:21.692068  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:21.692082  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:21.783812  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:21.783843  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:21.799265  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:21.799297  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:21.858012  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:21.858034  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:21.858055  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:21.888680  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:21.888707  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:21.914847  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:21.914873  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:21.939973  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:21.939996  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:21.993411  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:21.993438  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
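
The 263844 log stream above is minikube's retry loop for a control plane that never comes up: each pass re-checks https://192.168.85.2:8443/healthz (connection refused), lists the CRI containers it can still find (kube-apiserver, kube-scheduler and kube-controller-manager, but no etcd, coredns or kube-proxy), and gathers kubelet, dmesg, CRI-O and describe-nodes output, the last of which also fails because nothing answers on localhost:8443. A minimal shell sketch of the same probes, assuming shell access to the node; the address, unit names and container name are copied from the log above rather than verified independently:

    # Probe the apiserver health endpoint that keeps refusing connections
    curl -k https://192.168.85.2:8443/healthz || echo "apiserver not reachable"

    # List kube-apiserver containers the way minikube does in the log above
    sudo crictl ps -a --quiet --name=kube-apiserver

    # Gather the same service logs minikube collects when the check fails
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
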
	
	
	==> CRI-O <==
	Dec 12 00:34:42 embed-certs-858659 crio[569]: time="2025-12-12T00:34:42.260114195Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 00:34:42 embed-certs-858659 crio[569]: time="2025-12-12T00:34:42.26344945Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 00:34:42 embed-certs-858659 crio[569]: time="2025-12-12T00:34:42.263469038Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.486742292Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=73ca98c1-b646-4f43-b127-e5df985cbe34 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.489994689Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4af99606-83b0-4a21-9b58-203d1f91c6bb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.493923025Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm/dashboard-metrics-scraper" id=af31a627-410c-4532-a4d0-c21db5542dde name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.494115848Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.504155832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.505087114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.54103308Z" level=info msg="Created container 7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm/dashboard-metrics-scraper" id=af31a627-410c-4532-a4d0-c21db5542dde name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.54189676Z" level=info msg="Starting container: 7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50" id=6660415d-bfca-44a5-8ec0-f61809ed07e6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.543976036Z" level=info msg="Started container" PID=1760 containerID=7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm/dashboard-metrics-scraper id=6660415d-bfca-44a5-8ec0-f61809ed07e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7600ad226ffddfbd7d0d21211f90fc33c01c657f0743ec2baec27e209bbfdb0a
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.592694345Z" level=info msg="Removing container: 81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a" id=6300f71a-e44a-47b5-9c9d-0c540bfdeb9b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.603676874Z" level=info msg="Removed container 81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm/dashboard-metrics-scraper" id=6300f71a-e44a-47b5-9c9d-0c540bfdeb9b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.611133138Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=980d6bd8-e5f2-4d16-a626-3f5a12b76287 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.612192795Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c162c074-0482-4522-b488-d0d245a4f30e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.61453923Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a968f8d5-2475-4d58-83c6-1d73a2967ad0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.614674543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.618973785Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.619139312Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/57f00149134251d7f24f5fe16a0eef6bbec934bff7e247a8b7ce0c76b0ad6142/merged/etc/passwd: no such file or directory"
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.619166785Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/57f00149134251d7f24f5fe16a0eef6bbec934bff7e247a8b7ce0c76b0ad6142/merged/etc/group: no such file or directory"
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.620413948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.653558593Z" level=info msg="Created container 5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741: kube-system/storage-provisioner/storage-provisioner" id=a968f8d5-2475-4d58-83c6-1d73a2967ad0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.654140099Z" level=info msg="Starting container: 5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741" id=19d95e9c-f9cd-4b0d-9c8f-5e22b7caa94c name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.655868671Z" level=info msg="Started container" PID=1777 containerID=5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741 description=kube-system/storage-provisioner/storage-provisioner id=19d95e9c-f9cd-4b0d-9c8f-5e22b7caa94c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ea8821dfddcd916891f5107961fd4975e0c85bbe07877cf3f7aed9a973ed9b02
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5d164991478d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   ea8821dfddcd9       storage-provisioner                          kube-system
	7d19b30263df4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   7600ad226ffdd       dashboard-metrics-scraper-6ffb444bf9-52czm   kubernetes-dashboard
	70c68a4579255       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   0e3e8604a746a       kubernetes-dashboard-855c9754f9-4fw4k        kubernetes-dashboard
	ca1dd79202ff4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   1b93e5e35c341       busybox                                      default
	7c809ac147187       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   4d038ff0c6484       coredns-66bc5c9577-8x66p                     kube-system
	3c9dcfc0a39b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   ea8821dfddcd9       storage-provisioner                          kube-system
	06a70ae8e015e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           51 seconds ago      Running             kube-proxy                  0                   e9b88dfe031a4       kube-proxy-httpr                             kube-system
	0427b9f7a3afc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   06ac5092b2f13       kindnet-9jvdg                                kube-system
	6eb73517ebee8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           54 seconds ago      Running             kube-controller-manager     0                   757495dd1b037       kube-controller-manager-embed-certs-858659   kube-system
	07f4a35d8d4d1       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           54 seconds ago      Running             kube-scheduler              0                   511cd44010edd       kube-scheduler-embed-certs-858659            kube-system
	a4dfa21dd3b08       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           54 seconds ago      Running             kube-apiserver              0                   a1b2f474a9233       kube-apiserver-embed-certs-858659            kube-system
	3da8c9c634a05       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   59006ad0a99b6       etcd-embed-certs-858659                      kube-system
	
	
	==> coredns [7c809ac147187b6cddcd652c01f5ca9264456d90ed5b50ad626d6a454686cea5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35018 - 23966 "HINFO IN 7819491733879595867.4297871292551262055. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.120876135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-858659
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-858659
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=embed-certs-858659
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_33_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:33:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-858659
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:35:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:35:11 +0000   Fri, 12 Dec 2025 00:33:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:35:11 +0000   Fri, 12 Dec 2025 00:33:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:35:11 +0000   Fri, 12 Dec 2025 00:33:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:35:11 +0000   Fri, 12 Dec 2025 00:34:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-858659
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                116d1391-d680-420b-9323-ddc7dc668b8a
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-8x66p                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-embed-certs-858659                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-9jvdg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-embed-certs-858659             250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-embed-certs-858659    200m (2%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-httpr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-embed-certs-858659             100m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-52czm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4fw4k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s (x8 over 112s)  kubelet          Node embed-certs-858659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x8 over 112s)  kubelet          Node embed-certs-858659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x8 over 112s)  kubelet          Node embed-certs-858659 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    107s                 kubelet          Node embed-certs-858659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  107s                 kubelet          Node embed-certs-858659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     107s                 kubelet          Node embed-certs-858659 status is now: NodeHasSufficientPID
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s                 node-controller  Node embed-certs-858659 event: Registered Node embed-certs-858659 in Controller
	  Normal  NodeReady                90s                  kubelet          Node embed-certs-858659 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node embed-certs-858659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node embed-certs-858659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node embed-certs-858659 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node embed-certs-858659 event: Registered Node embed-certs-858659 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [3da8c9c634a0528028b8a0875544b6a0c41e72dc8bf1ff1da95beccf80094376] <==
	{"level":"warn","ts":"2025-12-12T00:34:30.154501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.165652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.172974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.180865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.187180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.193090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.199375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.205697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.212120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.222654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.231259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.242349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.250530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.258396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.265384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.272882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.280745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.289209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.295280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.302672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.309925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.327242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.335071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.342250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.389741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34088","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:35:23 up  1:17,  0 user,  load average: 5.33, 3.41, 2.08
	Linux embed-certs-858659 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0427b9f7a3afccc4063d5b86ecc7f448e0192c4c5812428503d97bac33bfe9cd] <==
	I1212 00:34:32.039180       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:34:32.039438       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1212 00:34:32.039628       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:34:32.039649       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:34:32.039670       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:34:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:34:32.328599       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:34:32.328685       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:34:32.328699       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:34:32.328830       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:34:32.729057       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:34:32.729091       1 metrics.go:72] Registering metrics
	I1212 00:34:32.729176       1 controller.go:711] "Syncing nftables rules"
	I1212 00:34:42.245940       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:34:42.245990       1 main.go:301] handling current node
	I1212 00:34:52.246791       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:34:52.246837       1 main.go:301] handling current node
	I1212 00:35:02.246842       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:35:02.246887       1 main.go:301] handling current node
	I1212 00:35:12.246996       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:35:12.247039       1 main.go:301] handling current node
	I1212 00:35:22.254561       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:35:22.254594       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a4dfa21dd3b082f3a9464428da14520464aa97d1551c9a8ddffbf56d063877a6] <==
	I1212 00:34:30.876895       1 aggregator.go:171] initial CRD sync complete...
	I1212 00:34:30.876907       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 00:34:30.876914       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:34:30.876920       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:34:30.876965       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 00:34:30.876997       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 00:34:30.877013       1 policy_source.go:240] refreshing policies
	I1212 00:34:30.877082       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 00:34:30.877121       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 00:34:30.877891       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:34:30.884332       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1212 00:34:30.885237       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 00:34:30.892241       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:34:30.904246       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 00:34:31.217739       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:34:31.246919       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:34:31.266164       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:34:31.273913       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:34:31.281770       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:34:31.315930       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.172.208"}
	I1212 00:34:31.326794       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.35.38"}
	I1212 00:34:31.781969       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:34:33.750094       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:34:33.851798       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:34:34.001462       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6eb73517ebee81333693231ad0296e35a758eeb5aa9cc45c7ee7663ea6e652e4] <==
	I1212 00:34:33.339307       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 00:34:33.342585       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 00:34:33.343945       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 00:34:33.345943       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 00:34:33.346186       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1212 00:34:33.347115       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 00:34:33.347149       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 00:34:33.347209       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 00:34:33.347304       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 00:34:33.347415       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-858659"
	I1212 00:34:33.347456       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 00:34:33.347528       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 00:34:33.347565       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 00:34:33.347962       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 00:34:33.348307       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 00:34:33.348462       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 00:34:33.349971       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 00:34:33.350066       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 00:34:33.354306       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 00:34:33.354351       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:34:33.366204       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 00:34:33.447231       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:34:33.447248       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 00:34:33.447255       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 00:34:33.467005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [06a70ae8e015e6ed73dee2a76a938f3ac6d6569ec98202ecea145fbd5fdd2e6e] <==
	I1212 00:34:31.922771       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:34:31.989779       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 00:34:32.090652       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 00:34:32.090733       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1212 00:34:32.090846       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:34:32.114692       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:34:32.114762       1 server_linux.go:132] "Using iptables Proxier"
	I1212 00:34:32.121168       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:34:32.121628       1 server.go:527] "Version info" version="v1.34.2"
	I1212 00:34:32.121660       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:32.124869       1 config.go:200] "Starting service config controller"
	I1212 00:34:32.124893       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:34:32.124911       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:34:32.124916       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:34:32.124929       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:34:32.124935       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:34:32.125622       1 config.go:309] "Starting node config controller"
	I1212 00:34:32.125792       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:34:32.225214       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:34:32.225230       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:34:32.225230       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:34:32.225966       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [07f4a35d8d4d1bc19d0c0dc6b015f381bc482dc379a9a416e57528498aa8e1e5] <==
	I1212 00:34:29.815614       1 serving.go:386] Generated self-signed cert in-memory
	W1212 00:34:30.815462       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:34:30.815535       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:34:30.815551       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:34:30.815560       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:34:30.856300       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 00:34:30.856407       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:30.859303       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:34:30.859390       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:34:30.860416       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 00:34:30.860502       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 00:34:30.960445       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:34:34 embed-certs-858659 kubelet[734]: I1212 00:34:34.036948     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhpc\" (UniqueName: \"kubernetes.io/projected/f2596c32-24e3-46ba-946a-60b89b5e73dc-kube-api-access-qqhpc\") pod \"kubernetes-dashboard-855c9754f9-4fw4k\" (UID: \"f2596c32-24e3-46ba-946a-60b89b5e73dc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4fw4k"
	Dec 12 00:34:34 embed-certs-858659 kubelet[734]: I1212 00:34:34.037000     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f2596c32-24e3-46ba-946a-60b89b5e73dc-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4fw4k\" (UID: \"f2596c32-24e3-46ba-946a-60b89b5e73dc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4fw4k"
	Dec 12 00:34:36 embed-certs-858659 kubelet[734]: I1212 00:34:36.820666     734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 00:34:37 embed-certs-858659 kubelet[734]: I1212 00:34:37.534120     734 scope.go:117] "RemoveContainer" containerID="507e33e6e4a97a9696d02408549febf4eb7070392e528ae4fb9562613b1b3760"
	Dec 12 00:34:38 embed-certs-858659 kubelet[734]: I1212 00:34:38.540625     734 scope.go:117] "RemoveContainer" containerID="507e33e6e4a97a9696d02408549febf4eb7070392e528ae4fb9562613b1b3760"
	Dec 12 00:34:38 embed-certs-858659 kubelet[734]: I1212 00:34:38.540963     734 scope.go:117] "RemoveContainer" containerID="81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a"
	Dec 12 00:34:38 embed-certs-858659 kubelet[734]: E1212 00:34:38.541152     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:34:39 embed-certs-858659 kubelet[734]: I1212 00:34:39.546465     734 scope.go:117] "RemoveContainer" containerID="81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a"
	Dec 12 00:34:39 embed-certs-858659 kubelet[734]: E1212 00:34:39.546726     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:34:42 embed-certs-858659 kubelet[734]: I1212 00:34:42.563613     734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4fw4k" podStartSLOduration=2.27918682 podStartE2EDuration="9.563589117s" podCreationTimestamp="2025-12-12 00:34:33 +0000 UTC" firstStartedPulling="2025-12-12 00:34:34.263037667 +0000 UTC m=+5.861040122" lastFinishedPulling="2025-12-12 00:34:41.547439954 +0000 UTC m=+13.145442419" observedRunningTime="2025-12-12 00:34:42.563318238 +0000 UTC m=+14.161320706" watchObservedRunningTime="2025-12-12 00:34:42.563589117 +0000 UTC m=+14.161591588"
	Dec 12 00:34:43 embed-certs-858659 kubelet[734]: I1212 00:34:43.201361     734 scope.go:117] "RemoveContainer" containerID="81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a"
	Dec 12 00:34:43 embed-certs-858659 kubelet[734]: E1212 00:34:43.201554     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:34:56 embed-certs-858659 kubelet[734]: I1212 00:34:56.485298     734 scope.go:117] "RemoveContainer" containerID="81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a"
	Dec 12 00:34:56 embed-certs-858659 kubelet[734]: I1212 00:34:56.591311     734 scope.go:117] "RemoveContainer" containerID="81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a"
	Dec 12 00:34:56 embed-certs-858659 kubelet[734]: I1212 00:34:56.592020     734 scope.go:117] "RemoveContainer" containerID="7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50"
	Dec 12 00:34:56 embed-certs-858659 kubelet[734]: E1212 00:34:56.592330     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:35:02 embed-certs-858659 kubelet[734]: I1212 00:35:02.610673     734 scope.go:117] "RemoveContainer" containerID="3c9dcfc0a39b026ebae759fa5108c0241e5b83a83111d1a9c78bde6446593eed"
	Dec 12 00:35:03 embed-certs-858659 kubelet[734]: I1212 00:35:03.201789     734 scope.go:117] "RemoveContainer" containerID="7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50"
	Dec 12 00:35:03 embed-certs-858659 kubelet[734]: E1212 00:35:03.202018     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:35:13 embed-certs-858659 kubelet[734]: I1212 00:35:13.485245     734 scope.go:117] "RemoveContainer" containerID="7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50"
	Dec 12 00:35:13 embed-certs-858659 kubelet[734]: E1212 00:35:13.485419     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:35:20 embed-certs-858659 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:35:20 embed-certs-858659 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:35:20 embed-certs-858659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:20 embed-certs-858659 systemd[1]: kubelet.service: Consumed 1.594s CPU time.
	
	
	==> kubernetes-dashboard [70c68a457925502a9aee2ba9aecc60dbf1e189971126f7728a2e5d3dad2af8c7] <==
	2025/12/12 00:34:41 Starting overwatch
	2025/12/12 00:34:41 Using namespace: kubernetes-dashboard
	2025/12/12 00:34:41 Using in-cluster config to connect to apiserver
	2025/12/12 00:34:41 Using secret token for csrf signing
	2025/12/12 00:34:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 00:34:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 00:34:41 Successful initial request to the apiserver, version: v1.34.2
	2025/12/12 00:34:41 Generating JWE encryption key
	2025/12/12 00:34:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 00:34:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 00:34:41 Initializing JWE encryption key from synchronized object
	2025/12/12 00:34:41 Creating in-cluster Sidecar client
	2025/12/12 00:34:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 00:34:41 Serving insecurely on HTTP port: 9090
	2025/12/12 00:35:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3c9dcfc0a39b026ebae759fa5108c0241e5b83a83111d1a9c78bde6446593eed] <==
	I1212 00:34:31.879311       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 00:35:01.884527       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741] <==
	I1212 00:35:02.667946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:35:02.675840       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:35:02.675875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 00:35:02.678103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:06.133389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:10.393970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:13.992491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:17.047722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:20.069630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:20.075118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:35:20.075289       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:35:20.075450       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-858659_d6aa2683-35ae-49e4-bebf-421db8cefbff!
	I1212 00:35:20.075449       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78ef122f-55d8-421e-a9ec-895d80aa214b", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-858659_d6aa2683-35ae-49e4-bebf-421db8cefbff became leader
	W1212 00:35:20.077604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:20.080985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:35:20.175691       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-858659_d6aa2683-35ae-49e4-bebf-421db8cefbff!
	W1212 00:35:22.087919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:22.100201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-858659 -n embed-certs-858659
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-858659 -n embed-certs-858659: exit status 2 (377.601411ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-858659 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-858659
helpers_test.go:244: (dbg) docker inspect embed-certs-858659:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705",
	        "Created": "2025-12-12T00:33:20.451187787Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292428,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:34:22.437163096Z",
	            "FinishedAt": "2025-12-12T00:34:21.472806865Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/hostname",
	        "HostsPath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/hosts",
	        "LogPath": "/var/lib/docker/containers/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705/feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705-json.log",
	        "Name": "/embed-certs-858659",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-858659:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-858659",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "feaf39a3749e40570a9ecd33c6ca6bc86cf6a8ffb76e14d91ab69901fbf2d705",
	                "LowerDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f6ddba1e54d34c41adb7c12d95ba48c15153bae18741ec1ec818d27555aa5e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-858659",
	                "Source": "/var/lib/docker/volumes/embed-certs-858659/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-858659",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-858659",
	                "name.minikube.sigs.k8s.io": "embed-certs-858659",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9177ca98739f469da2b7ee8e7cb53794ae2e804a71bdc85b19de9b0066d541b1",
	            "SandboxKey": "/var/run/docker/netns/9177ca98739f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-858659": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0c60cc9085547f771925e29c2f723fccc6f964f64d3d64910bb19e85e09e545",
	                    "EndpointID": "49d5fd8b1aa19317bbcb648ace278a24c31dbd8c0d280910efe99d37b9f06433",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b2:b3:21:ac:19:7a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-858659",
	                        "feaf39a3749e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
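For orientation: the "Ports" map in the inspect output above is what the harness reads back whenever it needs the SSH endpoint of the node container (the same Go template shows up later in the logs for the "22/tcp" mapping). A minimal, self-contained sketch of that lookup follows; it is not minikube's own code, and the container name is simply the one from the output above.

	// hostport.go — illustrative only: read the published host port for 22/tcp
	// back out of `docker container inspect`, using the same Go template the
	// logs below show being run against the node container.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Container name taken from the inspect output above; adjust as needed.
		name := "embed-certs-858659"
		// Picks the first host binding of the container's 22/tcp port.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33083
	}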
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-858659 -n embed-certs-858659
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-858659 -n embed-certs-858659: exit status 2 (347.506539ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
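As the harness notes, the printed host state and the process exit code carry different information here: stdout reports "Running" while the command still exits 2, which the test treats as possibly acceptable. The sketch below (assumed to mirror the command shown above; not part of the test suite) shows how both signals can be read separately from Go.

	// status_check.go — illustrative only: capture the printed host state and
	// the exit code of `minikube status` independently, since a non-zero exit
	// can occur even when the host itself reports Running.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "embed-certs-858659", "-n", "embed-certs-858659")
		out, err := cmd.Output() // stdout is still returned alongside an ExitError
		state := strings.TrimSpace(string(out)) // e.g. "Running"

		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // e.g. 2: some component is not in the expected state
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("host=%s exit=%d\n", state, code)
	}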
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-858659 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-858659 logs -n 25: (1.295879266s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-743506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p old-k8s-version-743506 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p no-preload-675290 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-858659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ stop    │ -p embed-certs-858659 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-743506 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p no-preload-675290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-858659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ image   │ old-k8s-version-743506 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p old-k8s-version-743506 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ image   │ no-preload-675290 image list --format=json                                                                                                                                                                                                           │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p no-preload-675290 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ delete  │ -p disable-driver-mounts-039387                                                                                                                                                                                                                      │ disable-driver-mounts-039387 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p default-k8s-diff-port-079970 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ image   │ embed-certs-858659 image list --format=json                                                                                                                                                                                                          │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ pause   │ -p embed-certs-858659 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:01.676845  302677 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:01.676993  302677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:01.677000  302677 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:01.677007  302677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:01.677289  302677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:35:01.677788  302677 out.go:368] Setting JSON to false
	I1212 00:35:01.679324  302677 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4648,"bootTime":1765495054,"procs":422,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:35:01.679417  302677 start.go:143] virtualization: kvm guest
	I1212 00:35:01.681915  302677 out.go:179] * [newest-cni-821472] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:35:01.683469  302677 notify.go:221] Checking for updates...
	I1212 00:35:01.684214  302677 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:01.685594  302677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:01.687066  302677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:01.689936  302677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:35:01.694337  302677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:35:01.696522  302677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:01.698402  302677 config.go:182] Loaded profile config "default-k8s-diff-port-079970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:01.698569  302677 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:01.698687  302677 config.go:182] Loaded profile config "kubernetes-upgrade-605797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:01.698885  302677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:01.726288  302677 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:35:01.726426  302677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:01.811670  302677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:35:01.799849378 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:01.811833  302677 docker.go:319] overlay module found
	I1212 00:35:01.813434  302677 out.go:179] * Using the docker driver based on user configuration
	I1212 00:35:01.814701  302677 start.go:309] selected driver: docker
	I1212 00:35:01.814716  302677 start.go:927] validating driver "docker" against <nil>
	I1212 00:35:01.814728  302677 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:01.815393  302677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:01.879143  302677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:35:01.868556648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:01.879427  302677 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1212 00:35:01.879507  302677 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 00:35:01.879785  302677 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 00:35:01.882050  302677 out.go:179] * Using Docker driver with root privileges
	I1212 00:35:01.883265  302677 cni.go:84] Creating CNI manager for ""
	I1212 00:35:01.883332  302677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:01.883343  302677 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:35:01.883409  302677 start.go:353] cluster config:
	{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:01.885115  302677 out.go:179] * Starting "newest-cni-821472" primary control-plane node in "newest-cni-821472" cluster
	I1212 00:35:01.886758  302677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:35:01.888146  302677 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:35:01.889263  302677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:01.889310  302677 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:35:01.889328  302677 cache.go:65] Caching tarball of preloaded images
	I1212 00:35:01.889363  302677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:35:01.889427  302677 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:35:01.889449  302677 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:35:01.889596  302677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:01.889624  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json: {Name:mk8e6ad7ce238dbea537fa1dd3602a56e56a71c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:01.912683  302677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:35:01.912708  302677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:35:01.912721  302677 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:35:01.912755  302677 start.go:360] acquireMachinesLock for newest-cni-821472: {Name:mk1920b4afd40f764aad092389429d0db04875a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:35:01.912844  302677 start.go:364] duration metric: took 68.015µs to acquireMachinesLock for "newest-cni-821472"
	I1212 00:35:01.912868  302677 start.go:93] Provisioning new machine with config: &{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:35:01.912974  302677 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:34:57.061451  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:57.061496  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:57.078642  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:57.078672  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:57.147934  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:57.147960  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:57.147976  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:57.187226  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:57.187259  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:57.217714  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:57.217752  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:57.247042  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:57.247073  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:57.306744  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:57.306777  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:59.839534  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:59.839948  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:59.839995  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:59.840045  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:59.865677  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:59.865695  263844 cri.go:89] found id: ""
	I1212 00:34:59.865702  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:59.865745  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:59.869496  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:59.869563  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:59.894925  263844 cri.go:89] found id: ""
	I1212 00:34:59.894952  263844 logs.go:282] 0 containers: []
	W1212 00:34:59.894961  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:59.894969  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:59.895019  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:59.919987  263844 cri.go:89] found id: ""
	I1212 00:34:59.920013  263844 logs.go:282] 0 containers: []
	W1212 00:34:59.920027  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:59.920035  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:59.920089  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:59.949410  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:59.949436  263844 cri.go:89] found id: ""
	I1212 00:34:59.949445  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:59.949527  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:59.953413  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:59.953491  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:59.981859  263844 cri.go:89] found id: ""
	I1212 00:34:59.981886  263844 logs.go:282] 0 containers: []
	W1212 00:34:59.981897  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:59.981905  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:59.981958  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:00.014039  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:00.014064  263844 cri.go:89] found id: ""
	I1212 00:35:00.014093  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:00.014157  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:00.019033  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:00.019101  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:00.047071  263844 cri.go:89] found id: ""
	I1212 00:35:00.047098  263844 logs.go:282] 0 containers: []
	W1212 00:35:00.047110  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:00.047132  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:00.047182  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:00.074181  263844 cri.go:89] found id: ""
	I1212 00:35:00.074200  263844 logs.go:282] 0 containers: []
	W1212 00:35:00.074214  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:00.074222  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:00.074234  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:00.151605  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:00.151642  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:00.166946  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:00.166974  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:00.225208  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:00.225231  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:00.225245  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:00.258713  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:00.258739  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:00.286626  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:00.286654  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:00.311078  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:00.311101  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:00.366630  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:00.366659  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 00:34:57.522454  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:35:00.022987  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	I1212 00:35:01.428421  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Running}}
	I1212 00:35:01.447336  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:01.468796  300250 cli_runner.go:164] Run: docker exec default-k8s-diff-port-079970 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:35:01.515960  300250 oci.go:144] the created container "default-k8s-diff-port-079970" has a running status.
	I1212 00:35:01.515989  300250 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa...
	I1212 00:35:01.609883  300250 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:35:01.636104  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:01.657225  300250 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:35:01.657249  300250 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-079970 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:35:01.719543  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:01.742574  300250 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:01.742665  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:01.772749  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:01.773093  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:01.773112  300250 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:01.775590  300250 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58832->127.0.0.1:33088: read: connection reset by peer
	I1212 00:35:04.907004  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-079970
	
	I1212 00:35:04.907029  300250 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-079970"
	I1212 00:35:04.907083  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:04.925088  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:04.925306  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:04.925325  300250 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-079970 && echo "default-k8s-diff-port-079970" | sudo tee /etc/hostname
	I1212 00:35:05.104218  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-079970
	
	I1212 00:35:05.104321  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:05.122538  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:05.122808  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:05.122836  300250 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-079970' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-079970/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-079970' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:05.254825  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:05.254864  300250 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:05.254892  300250 ubuntu.go:190] setting up certificates
	I1212 00:35:05.254908  300250 provision.go:84] configureAuth start
	I1212 00:35:05.254998  300250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079970
	I1212 00:35:05.273295  300250 provision.go:143] copyHostCerts
	I1212 00:35:05.273350  300250 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:05.273360  300250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:05.273417  300250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:05.273528  300250 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:05.273537  300250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:05.273566  300250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:05.273626  300250 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:05.273633  300250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:05.273656  300250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:05.273705  300250 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-079970 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-079970 localhost minikube]
	I1212 00:35:05.296916  300250 provision.go:177] copyRemoteCerts
	I1212 00:35:05.296964  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:05.296999  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:05.315032  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:05.408808  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:05.456913  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 00:35:05.473342  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:35:05.490425  300250 provision.go:87] duration metric: took 235.494662ms to configureAuth
	I1212 00:35:05.490445  300250 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:05.490622  300250 config.go:182] Loaded profile config "default-k8s-diff-port-079970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:05.490727  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:05.507963  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:05.508252  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:05.508277  300250 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:01.914973  302677 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 00:35:01.915193  302677 start.go:159] libmachine.API.Create for "newest-cni-821472" (driver="docker")
	I1212 00:35:01.915221  302677 client.go:173] LocalClient.Create starting
	I1212 00:35:01.915286  302677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem
	I1212 00:35:01.915328  302677 main.go:143] libmachine: Decoding PEM data...
	I1212 00:35:01.915347  302677 main.go:143] libmachine: Parsing certificate...
	I1212 00:35:01.915411  302677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem
	I1212 00:35:01.915433  302677 main.go:143] libmachine: Decoding PEM data...
	I1212 00:35:01.915443  302677 main.go:143] libmachine: Parsing certificate...
	I1212 00:35:01.915761  302677 cli_runner.go:164] Run: docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:35:01.931948  302677 cli_runner.go:211] docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:35:01.932008  302677 network_create.go:284] running [docker network inspect newest-cni-821472] to gather additional debugging logs...
	I1212 00:35:01.932032  302677 cli_runner.go:164] Run: docker network inspect newest-cni-821472
	W1212 00:35:01.950167  302677 cli_runner.go:211] docker network inspect newest-cni-821472 returned with exit code 1
	I1212 00:35:01.950190  302677 network_create.go:287] error running [docker network inspect newest-cni-821472]: docker network inspect newest-cni-821472: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-821472 not found
	I1212 00:35:01.950205  302677 network_create.go:289] output of [docker network inspect newest-cni-821472]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-821472 not found
	
	** /stderr **
	I1212 00:35:01.950338  302677 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:01.968471  302677 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ad72354fdcf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:1e:a7:00:22:62} reservation:<nil>}
	I1212 00:35:01.969916  302677 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-18bf2e8051c8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:8e:8f:a6:9d:0c} reservation:<nil>}
	I1212 00:35:01.970685  302677 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5251ddf0a35 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:5a:cf:fd:1d:2a} reservation:<nil>}
	I1212 00:35:01.971365  302677 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b830d0}
	I1212 00:35:01.971387  302677 network_create.go:124] attempt to create docker network newest-cni-821472 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 00:35:01.971442  302677 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-821472 newest-cni-821472
	I1212 00:35:02.021201  302677 network_create.go:108] docker network newest-cni-821472 192.168.76.0/24 created
	I1212 00:35:02.021230  302677 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-821472" container
	I1212 00:35:02.021299  302677 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:35:02.039525  302677 cli_runner.go:164] Run: docker volume create newest-cni-821472 --label name.minikube.sigs.k8s.io=newest-cni-821472 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:35:02.058234  302677 oci.go:103] Successfully created a docker volume newest-cni-821472
	I1212 00:35:02.058323  302677 cli_runner.go:164] Run: docker run --rm --name newest-cni-821472-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-821472 --entrypoint /usr/bin/test -v newest-cni-821472:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:35:02.440426  302677 oci.go:107] Successfully prepared a docker volume newest-cni-821472
	I1212 00:35:02.440502  302677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:02.440515  302677 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:35:02.440587  302677 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-821472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:35:06.233731  302677 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-821472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.793098349s)
	I1212 00:35:06.233764  302677 kic.go:203] duration metric: took 3.793245572s to extract preloaded images to volume ...
	W1212 00:35:06.233853  302677 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 00:35:06.233893  302677 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 00:35:06.233935  302677 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:35:06.298519  302677 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-821472 --name newest-cni-821472 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-821472 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-821472 --network newest-cni-821472 --ip 192.168.76.2 --volume newest-cni-821472:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 00:35:06.590905  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Running}}
	I1212 00:35:06.609781  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:06.629292  302677 cli_runner.go:164] Run: docker exec newest-cni-821472 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:35:02.896994  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:02.897634  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:02.897694  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:02.897758  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:02.924538  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:02.924572  263844 cri.go:89] found id: ""
	I1212 00:35:02.924582  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:02.924639  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:02.928500  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:02.928556  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:02.957640  263844 cri.go:89] found id: ""
	I1212 00:35:02.957663  263844 logs.go:282] 0 containers: []
	W1212 00:35:02.957675  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:02.957682  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:02.957749  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:02.984525  263844 cri.go:89] found id: ""
	I1212 00:35:02.984548  263844 logs.go:282] 0 containers: []
	W1212 00:35:02.984558  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:02.984566  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:02.984634  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:03.010719  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:03.010742  263844 cri.go:89] found id: ""
	I1212 00:35:03.010751  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:03.010804  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:03.014657  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:03.014720  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:03.040940  263844 cri.go:89] found id: ""
	I1212 00:35:03.040963  263844 logs.go:282] 0 containers: []
	W1212 00:35:03.040973  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:03.040980  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:03.041037  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:03.067543  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:03.067569  263844 cri.go:89] found id: ""
	I1212 00:35:03.067580  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:03.067641  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:03.071653  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:03.071720  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:03.099943  263844 cri.go:89] found id: ""
	I1212 00:35:03.099969  263844 logs.go:282] 0 containers: []
	W1212 00:35:03.099980  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:03.099988  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:03.100045  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:03.127350  263844 cri.go:89] found id: ""
	I1212 00:35:03.127376  263844 logs.go:282] 0 containers: []
	W1212 00:35:03.127388  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:03.127400  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:03.127416  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:03.208587  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:03.208622  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:03.223382  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:03.223406  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:03.287357  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:03.287381  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:03.287402  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:03.320653  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:03.320683  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:03.347455  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:03.347501  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:03.377133  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:03.377159  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:03.442737  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:03.442776  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:05.978082  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:05.978455  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:05.978530  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:05.978587  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:06.006237  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:06.006255  263844 cri.go:89] found id: ""
	I1212 00:35:06.006262  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:06.006310  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:06.010366  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:06.010456  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:06.039694  263844 cri.go:89] found id: ""
	I1212 00:35:06.039716  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.039725  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:06.039733  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:06.039785  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:06.065594  263844 cri.go:89] found id: ""
	I1212 00:35:06.065618  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.065628  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:06.065639  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:06.065685  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:06.091181  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:06.091200  263844 cri.go:89] found id: ""
	I1212 00:35:06.091207  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:06.091259  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:06.094942  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:06.095000  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:06.118810  263844 cri.go:89] found id: ""
	I1212 00:35:06.118827  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.118834  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:06.118839  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:06.118881  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:06.143665  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:06.143686  263844 cri.go:89] found id: ""
	I1212 00:35:06.143694  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:06.143746  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:06.147318  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:06.147376  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:06.172858  263844 cri.go:89] found id: ""
	I1212 00:35:06.172883  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.172893  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:06.172901  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:06.172943  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:06.198461  263844 cri.go:89] found id: ""
	I1212 00:35:06.198494  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.198503  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:06.198514  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:06.198529  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:06.224744  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:06.224766  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:06.291819  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:06.291848  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:06.324908  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:06.324941  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:06.411894  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:06.411924  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:06.430682  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:06.430713  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:06.498584  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:06.498606  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:06.498619  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:06.531410  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:06.531436  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
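
	Note (illustrative, not from the run): the repeated "listing CRI containers ... Gathering logs" cycle above reduces to querying crictl per component and tailing whatever it finds. A rough standalone equivalent, run on the node, with the component list and tail length taken from the log:

	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	    ids=$(sudo crictl ps -a --quiet --name="${name}")
	    if [ -z "${ids}" ]; then
	      echo "No container was found matching \"${name}\"" >&2
	      continue
	    fi
	    for id in ${ids}; do
	      echo "== ${name} ${id} =="
	      sudo crictl logs --tail 400 "${id}"
	    done
	  done
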
	W1212 00:35:02.521725  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:35:05.022110  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	I1212 00:35:07.023465  292217 pod_ready.go:94] pod "coredns-66bc5c9577-8x66p" is "Ready"
	I1212 00:35:07.023510  292217 pod_ready.go:86] duration metric: took 34.506810459s for pod "coredns-66bc5c9577-8x66p" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.026308  292217 pod_ready.go:83] waiting for pod "etcd-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.031339  292217 pod_ready.go:94] pod "etcd-embed-certs-858659" is "Ready"
	I1212 00:35:07.031364  292217 pod_ready.go:86] duration metric: took 5.030782ms for pod "etcd-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.033638  292217 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.037393  292217 pod_ready.go:94] pod "kube-apiserver-embed-certs-858659" is "Ready"
	I1212 00:35:07.037409  292217 pod_ready.go:86] duration metric: took 3.7473ms for pod "kube-apiserver-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.039158  292217 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:05.994458  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:05.994507  300250 machine.go:97] duration metric: took 4.251910101s to provisionDockerMachine
	I1212 00:35:05.994521  300250 client.go:176] duration metric: took 10.066395512s to LocalClient.Create
	I1212 00:35:05.994537  300250 start.go:167] duration metric: took 10.066456505s to libmachine.API.Create "default-k8s-diff-port-079970"
	I1212 00:35:05.994547  300250 start.go:293] postStartSetup for "default-k8s-diff-port-079970" (driver="docker")
	I1212 00:35:05.994559  300250 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:05.994632  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:05.994709  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.013629  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.193644  300250 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:06.197617  300250 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:06.197647  300250 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:06.197663  300250 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:06.197726  300250 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:06.197851  300250 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:06.197983  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:06.206129  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:06.228239  300250 start.go:296] duration metric: took 233.677045ms for postStartSetup
	I1212 00:35:06.228629  300250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079970
	I1212 00:35:06.249261  300250 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/config.json ...
	I1212 00:35:06.249602  300250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:06.249656  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.274204  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.370994  300250 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:06.375520  300250 start.go:128] duration metric: took 10.449576963s to createHost
	I1212 00:35:06.375543  300250 start.go:83] releasing machines lock for "default-k8s-diff-port-079970", held for 10.449707099s
	I1212 00:35:06.375608  300250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079970
	I1212 00:35:06.394357  300250 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:06.394412  300250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:06.394417  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.394533  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.414536  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.414896  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.513946  300250 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:06.578632  300250 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:06.614844  300250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:06.619698  300250 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:06.619768  300250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:06.648376  300250 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:35:06.648400  300250 start.go:496] detecting cgroup driver to use...
	I1212 00:35:06.648437  300250 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:06.648518  300250 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:06.671629  300250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:06.686247  300250 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:06.686300  300250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:06.705702  300250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:06.724679  300250 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:06.831577  300250 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:06.959398  300250 docker.go:234] disabling docker service ...
	I1212 00:35:06.959458  300250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:06.980174  300250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:06.994308  300250 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:07.087727  300250 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:07.164553  300250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:35:07.176958  300250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:07.190774  300250 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:07.190827  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.200432  300250 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:07.200487  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.208778  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.216974  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.225627  300250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:07.233231  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.241973  300250 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.256118  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.264415  300250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:07.271752  300250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:07.278861  300250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:07.354779  300250 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:35:07.473228  300250 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:07.473293  300250 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:07.476916  300250 start.go:564] Will wait 60s for crictl version
	I1212 00:35:07.476963  300250 ssh_runner.go:195] Run: which crictl
	I1212 00:35:07.480267  300250 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:07.504728  300250 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:35:07.504799  300250 ssh_runner.go:195] Run: crio --version
	I1212 00:35:07.530870  300250 ssh_runner.go:195] Run: crio --version
	I1212 00:35:07.558537  300250 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
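
	Note (condensed sketch, not part of the test output): the CRI-O preparation above amounts to pointing crictl at the CRI-O socket, patching the 02-crio.conf drop-in, enabling IP forwarding, and restarting the runtime. The same sequence by hand, with paths and values copied from the log:

	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	  sudo systemctl daemon-reload
	  sudo systemctl restart crio
	  sudo crictl version    # expect RuntimeName cri-o, RuntimeVersion 1.34.3
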
	I1212 00:35:07.220781  292217 pod_ready.go:94] pod "kube-controller-manager-embed-certs-858659" is "Ready"
	I1212 00:35:07.220800  292217 pod_ready.go:86] duration metric: took 181.625687ms for pod "kube-controller-manager-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.420602  292217 pod_ready.go:83] waiting for pod "kube-proxy-httpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.820186  292217 pod_ready.go:94] pod "kube-proxy-httpr" is "Ready"
	I1212 00:35:07.820209  292217 pod_ready.go:86] duration metric: took 399.582316ms for pod "kube-proxy-httpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:08.021113  292217 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:08.420866  292217 pod_ready.go:94] pod "kube-scheduler-embed-certs-858659" is "Ready"
	I1212 00:35:08.420892  292217 pod_ready.go:86] duration metric: took 399.752565ms for pod "kube-scheduler-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:08.420907  292217 pod_ready.go:40] duration metric: took 35.909864777s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:35:08.463083  292217 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:35:08.464948  292217 out.go:179] * Done! kubectl is now configured to use "embed-certs-858659" cluster and "default" namespace by default
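
	Note (illustrative only): the per-component readiness the test waited for can be spot-checked by hand once kubectl points at the new profile, for example against the embed-certs-858659 context created above:

	  kubectl --context embed-certs-858659 -n kube-system get pods -o wide
	  kubectl --context embed-certs-858659 -n kube-system wait pod --all --for=condition=Ready --timeout=120s
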
	I1212 00:35:07.559551  300250 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-079970 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:07.576167  300250 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:07.580098  300250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:07.589991  300250 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-079970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079970 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:07.590086  300250 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:35:07.590127  300250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:07.620329  300250 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:07.620351  300250 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:07.620400  300250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:07.644998  300250 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:07.645015  300250 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:07.645022  300250 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1212 00:35:07.645104  300250 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-079970 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:35:07.645170  300250 ssh_runner.go:195] Run: crio config
	I1212 00:35:07.690243  300250 cni.go:84] Creating CNI manager for ""
	I1212 00:35:07.690273  300250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:07.690294  300250 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:35:07.690324  300250 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-079970 NodeName:default-k8s-diff-port-079970 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:07.690467  300250 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-079970"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:35:07.690549  300250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:35:07.698235  300250 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:07.698297  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:07.705827  300250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1212 00:35:07.717731  300250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:35:07.731840  300250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
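
	Note (illustrative, assumes a kubeadm release that ships "kubeadm config validate"): the manifest printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new by this scp step and could be sanity-checked before init, e.g.:

	  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
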
	I1212 00:35:07.743718  300250 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:07.747092  300250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:07.756184  300250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:07.837070  300250 ssh_runner.go:195] Run: sudo systemctl start kubelet
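
	Note (optional check, not performed in the log): at this point the kubelet unit and its 10-kubeadm.conf drop-in are in place and the service has been started; that the drop-in took effect can be verified with:

	  systemctl cat kubelet | grep -e hostname-override -e node-ip
	  systemctl is-active kubelet
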
	I1212 00:35:07.858364  300250 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970 for IP: 192.168.103.2
	I1212 00:35:07.858382  300250 certs.go:195] generating shared ca certs ...
	I1212 00:35:07.858403  300250 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:07.858571  300250 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:07.858647  300250 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:07.858665  300250 certs.go:257] generating profile certs ...
	I1212 00:35:07.858744  300250 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.key
	I1212 00:35:07.858767  300250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.crt with IP's: []
	I1212 00:35:08.016689  300250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.crt ...
	I1212 00:35:08.016714  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.crt: {Name:mk279da736f294eb825962e1a4edee25eac6315c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.016870  300250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.key ...
	I1212 00:35:08.016882  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.key: {Name:mkfdedaa1208476212f31534b586a502d7549554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.016963  300250 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396
	I1212 00:35:08.016979  300250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1212 00:35:08.149793  300250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396 ...
	I1212 00:35:08.149819  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396: {Name:mka877547c686ae761ca469a57d769f1b6209dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.149964  300250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396 ...
	I1212 00:35:08.149977  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396: {Name:mk85552646d74132d4452b62dd9a7c99b447f23e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.150046  300250 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt
	I1212 00:35:08.150128  300250 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key
	I1212 00:35:08.150185  300250 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key
	I1212 00:35:08.150200  300250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt with IP's: []
	I1212 00:35:08.211959  300250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt ...
	I1212 00:35:08.211982  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt: {Name:mkf49b46b9109148320675b03a3fb18a2de5b067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.212121  300250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key ...
	I1212 00:35:08.212135  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key: {Name:mke8549528b3e93e632d75bea181d060323f48f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.212313  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:08.212351  300250 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:08.212361  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:08.212385  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:08.212408  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:08.212438  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:08.212491  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:08.213045  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:08.231539  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:08.248058  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:08.265115  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:08.282076  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 00:35:08.298207  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:08.314097  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:08.329970  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:35:08.345884  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:08.363142  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:08.379029  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:08.394643  300250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
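
	Note (illustrative only): the apiserver certificate generated above is signed for the service and node IPs listed in the log; once the files are on the node, the SANs can be confirmed with:

	  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
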
	I1212 00:35:08.405961  300250 ssh_runner.go:195] Run: openssl version
	I1212 00:35:08.411533  300250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.418428  300250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:08.425583  300250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.429027  300250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.429071  300250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.466251  300250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:08.473689  300250 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:35:08.482528  300250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.493011  300250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:08.500005  300250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.503427  300250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.503503  300250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.544140  300250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:08.552101  300250 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:35:08.560501  300250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.569346  300250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:08.578244  300250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.582362  300250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.582410  300250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.623691  300250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:08.631946  300250 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
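
	Note (condensed sketch): the openssl -hash / ln -fs pairs above implement OpenSSL's subject-hash lookup convention for /etc/ssl/certs; for any one of the certificates the pattern is:

	  pem=/usr/share/ca-certificates/minikubeCA.pem
	  h=$(openssl x509 -hash -noout -in "$pem")      # b5213941 for minikubeCA.pem, per the log
	  sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
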
	I1212 00:35:08.639036  300250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:08.642462  300250 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:35:08.642530  300250 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-079970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079970 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:08.642593  300250 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:08.642659  300250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:08.668461  300250 cri.go:89] found id: ""
	I1212 00:35:08.668546  300250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:08.676116  300250 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:35:08.683428  300250 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:35:08.683469  300250 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:35:08.690898  300250 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:35:08.690916  300250 kubeadm.go:158] found existing configuration files:
	
	I1212 00:35:08.690954  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 00:35:08.698073  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:35:08.698143  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:35:08.705407  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 00:35:08.712693  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:35:08.712737  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:35:08.719767  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 00:35:08.727185  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:35:08.727228  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:35:08.736123  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 00:35:08.745640  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:35:08.745692  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:35:08.752893  300250 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:35:08.793583  300250 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:35:08.793654  300250 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:35:08.815256  300250 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:35:08.815324  300250 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:35:08.815354  300250 kubeadm.go:319] OS: Linux
	I1212 00:35:08.815415  300250 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:35:08.815505  300250 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:35:08.815591  300250 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:35:08.815662  300250 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:35:08.815736  300250 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:35:08.815811  300250 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:35:08.815859  300250 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:35:08.815899  300250 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:35:08.870824  300250 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:35:08.870961  300250 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:35:08.871069  300250 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:35:08.879811  300250 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:35:08.882551  300250 out.go:252]   - Generating certificates and keys ...
	I1212 00:35:08.882660  300250 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:35:08.882751  300250 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:35:09.176928  300250 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:35:09.363520  300250 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:35:09.693528  300250 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:35:09.731067  300250 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:35:09.965845  300250 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:35:09.966025  300250 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-079970 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 00:35:10.193416  300250 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:35:10.193638  300250 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-079970 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 00:35:10.269537  300250 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:35:06.679223  302677 oci.go:144] the created container "newest-cni-821472" has a running status.
	I1212 00:35:06.679254  302677 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa...
	I1212 00:35:06.738145  302677 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:35:06.769410  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:06.786614  302677 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:35:06.786644  302677 kic_runner.go:114] Args: [docker exec --privileged newest-cni-821472 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:35:06.834201  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:06.855717  302677 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:06.855800  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:06.888834  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:06.889195  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:06.889213  302677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:06.890003  302677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43636->127.0.0.1:33093: read: connection reset by peer
	I1212 00:35:10.020812  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:10.020840  302677 ubuntu.go:182] provisioning hostname "newest-cni-821472"
	I1212 00:35:10.020911  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.038793  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:10.039049  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:10.039068  302677 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-821472 && echo "newest-cni-821472" | sudo tee /etc/hostname
	I1212 00:35:10.178884  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:10.178957  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.196926  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:10.197149  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:10.197168  302677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-821472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-821472/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-821472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:10.325457  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
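For reference, the /etc/hosts edit that provisioning just ran over SSH can be reproduced by hand; a minimal sketch, assuming the same hostname as in the log and the usual 127.0.1.1 local-alias convention:

    # Map the node hostname to 127.0.1.1, mirroring the SSH command above.
    HOST=newest-cni-821472                      # hostname taken from the log
    if ! grep -q "[[:space:]]${HOST}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        # An alias line already exists: rewrite it to point at the new name.
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${HOST}/" /etc/hosts
      else
        # No alias line yet: append one.
        echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts >/dev/null
      fi
    fi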
	I1212 00:35:10.325500  302677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:10.325523  302677 ubuntu.go:190] setting up certificates
	I1212 00:35:10.325534  302677 provision.go:84] configureAuth start
	I1212 00:35:10.325583  302677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:10.343437  302677 provision.go:143] copyHostCerts
	I1212 00:35:10.343515  302677 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:10.343528  302677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:10.343592  302677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:10.343673  302677 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:10.343682  302677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:10.343707  302677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:10.343760  302677 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:10.343767  302677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:10.343788  302677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:10.343834  302677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.newest-cni-821472 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-821472]
	I1212 00:35:10.384676  302677 provision.go:177] copyRemoteCerts
	I1212 00:35:10.384719  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:10.384774  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.402191  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:10.496002  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:10.517047  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:35:10.533068  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:35:10.549151  302677 provision.go:87] duration metric: took 223.607325ms to configureAuth
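The server certificate provisioned above is signed for the SAN list shown in the generation line; one way to double-check it on the node, assuming openssl is installed and using the remote paths from the log:

    # Print the Subject Alternative Names of the provisioned server certificate.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'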
	I1212 00:35:10.549173  302677 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:10.549372  302677 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:10.549507  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.566450  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:10.566801  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:10.566825  302677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:10.840386  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:10.840420  302677 machine.go:97] duration metric: took 3.984678656s to provisionDockerMachine
	I1212 00:35:10.840443  302677 client.go:176] duration metric: took 8.925215047s to LocalClient.Create
	I1212 00:35:10.840469  302677 start.go:167] duration metric: took 8.925275616s to libmachine.API.Create "newest-cni-821472"
	I1212 00:35:10.840505  302677 start.go:293] postStartSetup for "newest-cni-821472" (driver="docker")
	I1212 00:35:10.840523  302677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:10.840596  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:10.840670  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.860332  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:10.956443  302677 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:10.959764  302677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:10.959792  302677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:10.959804  302677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:10.959857  302677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:10.959954  302677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:10.960087  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:10.967140  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:10.985778  302677 start.go:296] duration metric: took 145.261225ms for postStartSetup
	I1212 00:35:10.986158  302677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:11.004009  302677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:11.004287  302677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:11.004341  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:11.021447  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:11.111879  302677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:11.116117  302677 start.go:128] duration metric: took 9.20313041s to createHost
	I1212 00:35:11.116139  302677 start.go:83] releasing machines lock for "newest-cni-821472", held for 9.203283528s
	I1212 00:35:11.116244  302677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:11.133795  302677 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:11.133840  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:11.133860  302677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:11.133944  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:11.152108  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:11.153248  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:11.246020  302677 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:11.300646  302677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:11.334446  302677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:11.338640  302677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:11.338700  302677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:11.362650  302677 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
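The bridge and podman CNI configs are parked rather than deleted, by renaming them with a .mk_disabled suffix. The find invocation above, written out more readably (same directory and name patterns):

    # Rename any bridge/podman CNI config so the CNI setup does not pick it up.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;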
	I1212 00:35:11.362667  302677 start.go:496] detecting cgroup driver to use...
	I1212 00:35:11.362703  302677 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:11.362745  302677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:11.378085  302677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:11.389462  302677 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:11.389524  302677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:11.405814  302677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:11.421748  302677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:11.501993  302677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:11.586974  302677 docker.go:234] disabling docker service ...
	I1212 00:35:11.587043  302677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:11.606777  302677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:11.618506  302677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:09.059116  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:09.059525  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:09.059584  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:09.059632  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:09.089294  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:09.089312  263844 cri.go:89] found id: ""
	I1212 00:35:09.089319  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:09.089386  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:09.093045  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:09.093104  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:09.118495  263844 cri.go:89] found id: ""
	I1212 00:35:09.118518  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.118527  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:09.118535  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:09.118588  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:09.143107  263844 cri.go:89] found id: ""
	I1212 00:35:09.143128  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.143137  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:09.143144  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:09.143196  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:09.167389  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:09.167405  263844 cri.go:89] found id: ""
	I1212 00:35:09.167414  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:09.167459  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:09.171059  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:09.171107  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:09.195715  263844 cri.go:89] found id: ""
	I1212 00:35:09.195735  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.195744  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:09.195752  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:09.195808  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:09.220144  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:09.220165  263844 cri.go:89] found id: ""
	I1212 00:35:09.220174  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:09.220239  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:09.223868  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:09.223918  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:09.248862  263844 cri.go:89] found id: ""
	I1212 00:35:09.248885  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.248893  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:09.248900  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:09.248955  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:09.275068  263844 cri.go:89] found id: ""
	I1212 00:35:09.275093  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.275102  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:09.275114  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:09.275127  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:09.366099  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:09.366126  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:09.379543  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:09.379566  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:09.432943  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:09.432958  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:09.432969  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:09.463524  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:09.463547  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:09.490294  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:09.490317  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:09.514799  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:09.514828  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:09.571564  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:09.571589  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
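With the apiserver refusing connections, the diagnostics gathered above reduce to a handful of commands that can also be run by hand on the node; a sketch, assuming a CRI-O host with crictl on the PATH (the container id placeholder is whatever crictl ps -a reports):

    # Manual equivalents of the log-gathering steps above.
    sudo journalctl -u kubelet -n 400                  # kubelet logs
    sudo journalctl -u crio -n 400                     # CRI-O logs
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a                                  # container status
    sudo crictl logs --tail 400 <container-id>         # logs of one container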
	I1212 00:35:10.788385  300250 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:35:11.057552  300250 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:35:11.057688  300250 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:35:11.225689  300250 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:35:11.681036  300250 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:35:11.823928  300250 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:35:11.882037  300250 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:35:12.127371  300250 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:35:12.128027  300250 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:35:12.132435  300250 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:35:11.703202  302677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:11.784049  302677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
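Making CRI-O the only runtime follows the stop/disable/mask pattern visible above for both cri-dockerd and Docker; condensed into one sequence (units as named in the log):

    # Stop and mask cri-dockerd and Docker so only CRI-O answers on the CRI socket.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service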
	I1212 00:35:11.794982  302677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:11.808444  302677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:11.808518  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.818076  302677 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:11.818133  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.826275  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.834017  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.841736  302677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:11.848971  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.856669  302677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.868903  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.876935  302677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:11.883730  302677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:11.890166  302677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:11.971228  302677 ssh_runner.go:195] Run: sudo systemctl restart crio
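The CRI-O tweaks are all in-place edits to /etc/crio/crio.conf.d/02-crio.conf followed by a restart; the key ones from the run above, gathered into one readable block:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image and switch CRI-O to the systemd cgroup driver.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    # Recreate conmon_cgroup right after the cgroup_manager line.
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl restart crio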
	I1212 00:35:12.107537  302677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:12.107601  302677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:12.111902  302677 start.go:564] Will wait 60s for crictl version
	I1212 00:35:12.111953  302677 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.115979  302677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:12.143022  302677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:35:12.143092  302677 ssh_runner.go:195] Run: crio --version
	I1212 00:35:12.172766  302677 ssh_runner.go:195] Run: crio --version
	I1212 00:35:12.214057  302677 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
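The runtime probe above boils down to pointing crictl at the CRI-O socket; a minimal check, assuming the default socket path shown in the log:

    # Ask the CRI runtime for its name and version over the CRI-O socket.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version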
	I1212 00:35:12.215182  302677 cli_runner.go:164] Run: docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:12.234840  302677 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:12.238650  302677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:12.250364  302677 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 00:35:12.251584  302677 kubeadm.go:884] updating cluster {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:12.251750  302677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:12.251814  302677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:12.286886  302677 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:12.286909  302677 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:12.286960  302677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:12.312952  302677 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:12.312977  302677 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:12.312986  302677 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 00:35:12.313097  302677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-821472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
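The kubelet drop-in above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch of installing such a drop-in by hand, with the ExecStart flags copied from the unit shown above:

    # Install a kubelet drop-in like the one shown above, then restart kubelet.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' \
      '[Unit]' \
      'Wants=crio.service' \
      '' \
      '[Service]' \
      'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-821472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2' |
      sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet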
	I1212 00:35:12.313180  302677 ssh_runner.go:195] Run: crio config
	I1212 00:35:12.374192  302677 cni.go:84] Creating CNI manager for ""
	I1212 00:35:12.374218  302677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:12.374237  302677 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 00:35:12.374270  302677 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-821472 NodeName:newest-cni-821472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:12.374434  302677 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-821472"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:35:12.374535  302677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:35:12.383578  302677 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:12.383637  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:12.391434  302677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:35:12.404377  302677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:35:12.423679  302677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
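With the generated config written to /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before anything is applied to the node; a sketch, assuming the kubeadm binary path from the log and that a dry run is acceptable at this point:

    # Validate the generated kubeadm config without changing the node.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run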
	I1212 00:35:12.436660  302677 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:12.440733  302677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:12.451195  302677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:12.537285  302677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:12.559799  302677 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472 for IP: 192.168.76.2
	I1212 00:35:12.559818  302677 certs.go:195] generating shared ca certs ...
	I1212 00:35:12.559838  302677 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.559992  302677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:12.560043  302677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:12.560055  302677 certs.go:257] generating profile certs ...
	I1212 00:35:12.560117  302677 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key
	I1212 00:35:12.560142  302677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.crt with IP's: []
	I1212 00:35:12.605767  302677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.crt ...
	I1212 00:35:12.605794  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.crt: {Name:mk62a438d5b5213a1e604f2aad5a254998a9c462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.605974  302677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key ...
	I1212 00:35:12.605988  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key: {Name:mk63a3fc23e864057dcaa9c8effbe724759615bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.606086  302677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0
	I1212 00:35:12.606104  302677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 00:35:12.656429  302677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0 ...
	I1212 00:35:12.656451  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0: {Name:mkcaca46142fb6d4be74e9883db090ebf7e5cf3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.656597  302677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0 ...
	I1212 00:35:12.656613  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0: {Name:mkf4411e99bacc1b752d816d27a1043dcd50d436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.656687  302677 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt
	I1212 00:35:12.656757  302677 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key
	I1212 00:35:12.656810  302677 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key
	I1212 00:35:12.656825  302677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt with IP's: []
	I1212 00:35:12.794384  302677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt ...
	I1212 00:35:12.794408  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt: {Name:mk0075073046a87e6d2960d42dc638b72f046c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.794577  302677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key ...
	I1212 00:35:12.794595  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key: {Name:mk204e192392804e3969ede81a4f299490ab4215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.794762  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:12.794798  302677 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:12.794808  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:12.794839  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:12.794875  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:12.794899  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:12.794953  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:12.795698  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:12.813014  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:12.829220  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:12.845778  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:12.861807  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:35:12.878084  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:12.895917  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:12.912533  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:35:12.928436  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:12.946171  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:12.962496  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:12.978793  302677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:35:12.990605  302677 ssh_runner.go:195] Run: openssl version
	I1212 00:35:12.996246  302677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.003391  302677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:13.010336  302677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.013787  302677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.013843  302677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.047425  302677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:13.054597  302677 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:35:13.061519  302677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.068422  302677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:13.075772  302677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.079428  302677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.079485  302677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.113308  302677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:13.120292  302677 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:13.127088  302677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.133810  302677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:13.140741  302677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.144153  302677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.144193  302677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.177492  302677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:13.184429  302677 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
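The openssl and ln -fs pairs above implement the standard subject-hash symlink layout under /etc/ssl/certs; the same idea for a single certificate:

    # Link a CA certificate under its OpenSSL subject hash so verification can find it.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"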
	I1212 00:35:13.191968  302677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:13.196429  302677 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:35:13.196488  302677 kubeadm.go:401] StartCluster: {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:13.196559  302677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:13.196600  302677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:13.228866  302677 cri.go:89] found id: ""
	I1212 00:35:13.228944  302677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:13.238592  302677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:35:13.246897  302677 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:35:13.247000  302677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:35:13.254688  302677 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:35:13.254703  302677 kubeadm.go:158] found existing configuration files:
	
	I1212 00:35:13.254736  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:35:13.263097  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:35:13.263270  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:35:13.273080  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:35:13.282211  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:35:13.282268  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:35:13.291221  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:35:13.302552  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:35:13.302604  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:35:13.313834  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:35:13.325420  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:35:13.325564  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
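The stale-config cleanup above checks each kubeconfig for the expected control-plane endpoint and removes any file that does not reference it; the same logic as a small loop (endpoint and file names taken from the log):

    # Drop kubeconfigs that do not point at the expected control-plane endpoint.
    ENDPOINT=https://control-plane.minikube.internal:8443
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done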
	I1212 00:35:13.336740  302677 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:35:13.383615  302677 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:35:13.383902  302677 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:35:13.451434  302677 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:35:13.451591  302677 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:35:13.451650  302677 kubeadm.go:319] OS: Linux
	I1212 00:35:13.451744  302677 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:35:13.451811  302677 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:35:13.451890  302677 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:35:13.451953  302677 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:35:13.452042  302677 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:35:13.452118  302677 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:35:13.452187  302677 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:35:13.452283  302677 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:35:13.513534  302677 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:35:13.513688  302677 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:35:13.513825  302677 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:35:13.521572  302677 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:35:13.524580  302677 out.go:252]   - Generating certificates and keys ...
	I1212 00:35:13.524672  302677 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:35:13.524815  302677 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:35:13.627518  302677 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:35:13.704108  302677 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:35:13.773103  302677 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:35:13.794355  302677 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:35:13.913209  302677 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:35:13.913398  302677 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-821472] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:35:13.947286  302677 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:35:13.947435  302677 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-821472] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:35:14.159651  302677 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:35:14.216650  302677 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:35:14.472885  302677 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:35:14.473061  302677 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:35:14.633454  302677 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:35:14.753641  302677 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:35:14.814612  302677 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:35:14.895913  302677 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:35:14.914803  302677 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:35:14.915315  302677 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:35:14.919969  302677 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
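	At this point all certificates, kubeconfigs and the etcd manifest for newest-cni-821472 have been written. A minimal manual check, assuming the paths shown in the [certs] and [kubeconfig] phases above (certificateDir /var/lib/minikube/certs, kubeconfigs under /etc/kubernetes), run from inside the node:
	
	  # inspect the apiserver serving certificate written by the [certs] phase
	  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -dates
	  # confirm which server URL the admin kubeconfig points at
	  # (expected: control-plane.minikube.internal:8443, per the grep earlier in this log)
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view --kubeconfig=/etc/kubernetes/admin.conf --minify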
	I1212 00:35:12.133773  300250 out.go:252]   - Booting up control plane ...
	I1212 00:35:12.133887  300250 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:35:12.134001  300250 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:35:12.135036  300250 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:35:12.151772  300250 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:35:12.151914  300250 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:35:12.159834  300250 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:35:12.160143  300250 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:35:12.160211  300250 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:35:12.271327  300250 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:35:12.271562  300250 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:35:12.773136  300250 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.727313ms
	I1212 00:35:12.778862  300250 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:35:12.778987  300250 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1212 00:35:12.779121  300250 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:35:12.779237  300250 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:35:14.491599  300250 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.712585121s
	I1212 00:35:15.317576  300250 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.53833817s
	I1212 00:35:14.921519  302677 out.go:252]   - Booting up control plane ...
	I1212 00:35:14.921651  302677 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:35:14.924749  302677 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:35:14.924842  302677 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:35:14.943271  302677 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:35:14.943423  302677 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:35:14.949795  302677 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:35:14.950158  302677 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:35:14.950225  302677 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:35:15.059243  302677 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:35:15.059468  302677 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:35:15.560643  302677 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.54438ms
	I1212 00:35:15.564093  302677 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:35:15.564228  302677 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1212 00:35:15.564345  302677 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:35:15.564457  302677 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:35:16.569708  302677 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005463869s
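	The [control-plane-check] phase polls the three endpoints listed above. They can also be queried by hand from inside the node, a hedged sketch using the exact URLs from this log (the components serve self-signed certificates, hence -k):
	
	  curl -sk https://192.168.76.2:8443/livez      # kube-apiserver
	  curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	  curl -sk https://127.0.0.1:10259/livez        # kube-scheduler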
	I1212 00:35:12.100744  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:12.101121  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:12.101177  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:12.101233  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:12.130434  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:12.130457  263844 cri.go:89] found id: ""
	I1212 00:35:12.130467  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:12.130537  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.134919  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:12.134991  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:12.165688  263844 cri.go:89] found id: ""
	I1212 00:35:12.165712  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.165723  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:12.165735  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:12.165800  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:12.200067  263844 cri.go:89] found id: ""
	I1212 00:35:12.200095  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.200105  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:12.200114  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:12.200175  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:12.229100  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:12.229122  263844 cri.go:89] found id: ""
	I1212 00:35:12.229132  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:12.229188  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.233550  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:12.233611  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:12.263168  263844 cri.go:89] found id: ""
	I1212 00:35:12.263191  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.263197  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:12.263203  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:12.263249  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:12.290993  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:12.291011  263844 cri.go:89] found id: ""
	I1212 00:35:12.291020  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:12.291082  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.294886  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:12.294950  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:12.324024  263844 cri.go:89] found id: ""
	I1212 00:35:12.324047  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.324056  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:12.324064  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:12.324126  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:12.351428  263844 cri.go:89] found id: ""
	I1212 00:35:12.351452  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.351462  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:12.351498  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:12.351516  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:12.385548  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:12.385577  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:12.416157  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:12.416182  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:12.442624  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:12.442646  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:12.502981  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:12.503012  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:12.532856  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:12.532883  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:12.653112  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:12.653143  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:12.668189  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:12.668212  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:12.724901  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
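	When the healthz probe keeps failing with connection refused, as it does here, a few hedged triage steps from inside the node can narrow it down (ss and crictl are both present in the minikube image; the container id placeholder is whatever the ps call returns):
	
	  sudo ss -ltnp | grep 8443                    # is anything listening on the apiserver port?
	  sudo crictl ps -a --name=kube-apiserver      # is the container running or crash-looping?
	  sudo crictl logs --tail 50 <container-id>    # last lines from that container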
	I1212 00:35:15.225542  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:15.225965  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:15.226024  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:15.226088  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:15.254653  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:15.254676  263844 cri.go:89] found id: ""
	I1212 00:35:15.254685  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:15.254771  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:15.260402  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:15.260491  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:15.305137  263844 cri.go:89] found id: ""
	I1212 00:35:15.305160  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.305171  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:15.305178  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:15.305245  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:15.333505  263844 cri.go:89] found id: ""
	I1212 00:35:15.333538  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.333552  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:15.333561  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:15.333614  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:15.361356  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:15.361380  263844 cri.go:89] found id: ""
	I1212 00:35:15.361389  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:15.361453  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:15.365357  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:15.365419  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:15.396683  263844 cri.go:89] found id: ""
	I1212 00:35:15.396704  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.396711  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:15.396717  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:15.396773  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:15.429111  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:15.429152  263844 cri.go:89] found id: ""
	I1212 00:35:15.429163  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:15.429219  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:15.433130  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:15.433183  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:15.458694  263844 cri.go:89] found id: ""
	I1212 00:35:15.458722  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.458732  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:15.458740  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:15.458801  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:15.502451  263844 cri.go:89] found id: ""
	I1212 00:35:15.502498  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.502535  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:15.502550  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:15.502570  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:15.533938  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:15.533965  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:15.561307  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:15.561331  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:15.613786  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:15.613818  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:15.643661  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:15.643695  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:15.720562  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:15.720586  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:15.734392  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:15.734426  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:15.787130  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:15.787150  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:15.787162  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:17.280767  300250 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501711696s
	I1212 00:35:17.298890  300250 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:35:17.306798  300250 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:35:17.316204  300250 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:35:17.316535  300250 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-079970 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:35:17.324332  300250 kubeadm.go:319] [bootstrap-token] Using token: vzntbo.0n3hslrivx4nbk6h
	I1212 00:35:17.325581  300250 out.go:252]   - Configuring RBAC rules ...
	I1212 00:35:17.325727  300250 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:35:17.328649  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:35:17.333384  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:35:17.335765  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:35:17.338603  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:35:17.340971  300250 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:35:17.686781  300250 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:35:18.102084  300250 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:35:18.687720  300250 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:35:18.689043  300250 kubeadm.go:319] 
	I1212 00:35:18.689133  300250 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:35:18.689152  300250 kubeadm.go:319] 
	I1212 00:35:18.689249  300250 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:35:18.689270  300250 kubeadm.go:319] 
	I1212 00:35:18.689313  300250 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:35:18.689387  300250 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:35:18.689456  300250 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:35:18.689466  300250 kubeadm.go:319] 
	I1212 00:35:18.689568  300250 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:35:18.689580  300250 kubeadm.go:319] 
	I1212 00:35:18.689626  300250 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:35:18.689634  300250 kubeadm.go:319] 
	I1212 00:35:18.689709  300250 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:35:18.689831  300250 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:35:18.689940  300250 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:35:18.689950  300250 kubeadm.go:319] 
	I1212 00:35:18.690061  300250 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:35:18.690186  300250 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:35:18.690195  300250 kubeadm.go:319] 
	I1212 00:35:18.690314  300250 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token vzntbo.0n3hslrivx4nbk6h \
	I1212 00:35:18.690496  300250 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:35:18.690545  300250 kubeadm.go:319] 	--control-plane 
	I1212 00:35:18.690563  300250 kubeadm.go:319] 
	I1212 00:35:18.690686  300250 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:35:18.690696  300250 kubeadm.go:319] 
	I1212 00:35:18.690812  300250 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token vzntbo.0n3hslrivx4nbk6h \
	I1212 00:35:18.690954  300250 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1212 00:35:18.694339  300250 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:35:18.694550  300250 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
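	For reference, the --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA. This is the standard kubeadm recipe; the only assumption is the CA path, taken from the certificateDir /var/lib/minikube/certs used earlier in this log rather than the default /etc/kubernetes/pki:
	
	  sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'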
	I1212 00:35:18.694567  300250 cni.go:84] Creating CNI manager for ""
	I1212 00:35:18.694576  300250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:18.696703  300250 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 00:35:17.160850  302677 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.596679207s
	I1212 00:35:19.066502  302677 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502324833s
	I1212 00:35:19.085057  302677 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:35:19.096897  302677 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:35:19.107177  302677 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:35:19.107466  302677 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-821472 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:35:19.115974  302677 kubeadm.go:319] [bootstrap-token] Using token: 1hw5s9.frvasufed6x8ofpi
	I1212 00:35:19.117834  302677 out.go:252]   - Configuring RBAC rules ...
	I1212 00:35:19.117975  302677 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:35:19.120835  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:35:19.125865  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:35:19.127972  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:35:19.130202  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:35:19.132498  302677 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:35:19.472755  302677 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:35:19.889183  302677 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:35:20.472283  302677 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:35:20.473228  302677 kubeadm.go:319] 
	I1212 00:35:20.473347  302677 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:35:20.473382  302677 kubeadm.go:319] 
	I1212 00:35:20.473514  302677 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:35:20.473529  302677 kubeadm.go:319] 
	I1212 00:35:20.473564  302677 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:35:20.473653  302677 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:35:20.473726  302677 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:35:20.473742  302677 kubeadm.go:319] 
	I1212 00:35:20.473830  302677 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:35:20.473840  302677 kubeadm.go:319] 
	I1212 00:35:20.473908  302677 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:35:20.473917  302677 kubeadm.go:319] 
	I1212 00:35:20.473990  302677 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:35:20.474112  302677 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:35:20.474211  302677 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:35:20.474220  302677 kubeadm.go:319] 
	I1212 00:35:20.474319  302677 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:35:20.474421  302677 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:35:20.474431  302677 kubeadm.go:319] 
	I1212 00:35:20.474566  302677 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1hw5s9.frvasufed6x8ofpi \
	I1212 00:35:20.474716  302677 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:35:20.474777  302677 kubeadm.go:319] 	--control-plane 
	I1212 00:35:20.474785  302677 kubeadm.go:319] 
	I1212 00:35:20.474895  302677 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:35:20.474908  302677 kubeadm.go:319] 
	I1212 00:35:20.475015  302677 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1hw5s9.frvasufed6x8ofpi \
	I1212 00:35:20.475137  302677 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1212 00:35:20.477381  302677 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:35:20.477537  302677 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:35:20.477568  302677 cni.go:84] Creating CNI manager for ""
	I1212 00:35:20.477580  302677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:20.479065  302677 out.go:179] * Configuring CNI (Container Networking Interface) ...
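	For both profiles the CNI step amounts to stat'ing the portmap plugin and applying a kindnet manifest (the apply calls follow below). A hedged follow-up check on the node, with file names taken from the CRI-O log at the end of this dump:
	
	  ls /opt/cni/bin/                              # bundled CNI plugins (portmap et al.)
	  sudo cat /etc/cni/net.d/10-kindnet.conflist   # config kindnet writes once its DaemonSet is up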
	I1212 00:35:18.698118  300250 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:35:18.702449  300250 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 00:35:18.702465  300250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:35:18.716877  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:35:18.969312  300250 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:35:18.969392  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:18.969443  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-079970 minikube.k8s.io/updated_at=2025_12_12T00_35_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=default-k8s-diff-port-079970 minikube.k8s.io/primary=true
	I1212 00:35:18.980253  300250 ops.go:34] apiserver oom_adj: -16
	I1212 00:35:19.044290  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:19.544400  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.044663  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.544656  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.480077  302677 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:35:20.484251  302677 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1212 00:35:20.484270  302677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:35:20.498227  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:35:20.722785  302677 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:35:20.722859  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.722863  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-821472 minikube.k8s.io/updated_at=2025_12_12T00_35_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=newest-cni-821472 minikube.k8s.io/primary=true
	I1212 00:35:20.792681  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.792694  302677 ops.go:34] apiserver oom_adj: -16
	I1212 00:35:21.293770  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:18.318161  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:18.318666  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:18.318729  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:18.318786  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:18.352544  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:18.352567  263844 cri.go:89] found id: ""
	I1212 00:35:18.352577  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:18.352636  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:18.357312  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:18.357378  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:18.386881  263844 cri.go:89] found id: ""
	I1212 00:35:18.386902  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.386912  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:18.386919  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:18.386973  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:18.417064  263844 cri.go:89] found id: ""
	I1212 00:35:18.417088  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.417099  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:18.417107  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:18.417167  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:18.447543  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:18.447568  263844 cri.go:89] found id: ""
	I1212 00:35:18.447578  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:18.447647  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:18.452360  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:18.452420  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:18.481882  263844 cri.go:89] found id: ""
	I1212 00:35:18.481911  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.481923  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:18.481931  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:18.481984  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:18.522675  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:18.522698  263844 cri.go:89] found id: ""
	I1212 00:35:18.522707  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:18.522770  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:18.527549  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:18.527622  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:18.557639  263844 cri.go:89] found id: ""
	I1212 00:35:18.557663  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.557673  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:18.557680  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:18.557741  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:18.587529  263844 cri.go:89] found id: ""
	I1212 00:35:18.587558  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.587568  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:18.587582  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:18.587600  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:18.617911  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:18.617944  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:18.677434  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:18.677462  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:18.710192  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:18.710220  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:18.813216  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:18.813247  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:18.829843  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:18.829876  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:18.895288  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:18.895311  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:18.895326  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:18.928602  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:18.928632  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:21.468717  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:21.469075  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:21.469120  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:21.469168  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:21.494583  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:21.494599  263844 cri.go:89] found id: ""
	I1212 00:35:21.494607  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:21.494665  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:21.498756  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:21.498842  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:21.524669  263844 cri.go:89] found id: ""
	I1212 00:35:21.524694  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.524704  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:21.524710  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:21.524752  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:21.550501  263844 cri.go:89] found id: ""
	I1212 00:35:21.550525  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.550537  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:21.550544  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:21.550599  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:21.576757  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:21.576778  263844 cri.go:89] found id: ""
	I1212 00:35:21.576787  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:21.576840  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:21.580808  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:21.580869  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:21.610338  263844 cri.go:89] found id: ""
	I1212 00:35:21.610365  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.610380  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:21.610387  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:21.610458  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:21.636138  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:21.636157  263844 cri.go:89] found id: ""
	I1212 00:35:21.636164  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:21.636218  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:21.640049  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:21.640100  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:21.665873  263844 cri.go:89] found id: ""
	I1212 00:35:21.665897  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.665905  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:21.665913  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:21.665980  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:21.692021  263844 cri.go:89] found id: ""
	I1212 00:35:21.692046  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.692057  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:21.692068  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:21.692082  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:21.783812  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:21.783843  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:21.799265  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:21.799297  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:21.858012  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:21.858034  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:21.858055  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:21.888680  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:21.888707  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:21.914847  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:21.914873  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:21.939973  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:21.939996  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:21.993411  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:21.993438  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:21.044408  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:21.545165  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:22.045161  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:22.544651  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:23.044599  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:23.544880  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:23.613988  300250 kubeadm.go:1114] duration metric: took 4.644651054s to wait for elevateKubeSystemPrivileges
	I1212 00:35:23.614027  300250 kubeadm.go:403] duration metric: took 14.971500685s to StartCluster
	I1212 00:35:23.614050  300250 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:23.614149  300250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:23.616371  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:23.616639  300250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:35:23.616643  300250 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:35:23.616699  300250 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:35:23.616817  300250 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-079970"
	I1212 00:35:23.616827  300250 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-079970"
	I1212 00:35:23.616837  300250 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-079970"
	I1212 00:35:23.616844  300250 config.go:182] Loaded profile config "default-k8s-diff-port-079970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:23.616853  300250 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-079970"
	I1212 00:35:23.616871  300250 host.go:66] Checking if "default-k8s-diff-port-079970" exists ...
	I1212 00:35:23.617208  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:23.617376  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:23.620577  300250 out.go:179] * Verifying Kubernetes components...
	I1212 00:35:23.621791  300250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:23.640258  300250 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:35:23.641348  300250 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:23.641367  300250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:35:23.641447  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:23.642838  300250 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-079970"
	I1212 00:35:23.642881  300250 host.go:66] Checking if "default-k8s-diff-port-079970" exists ...
	I1212 00:35:23.643360  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:23.666418  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:23.675582  300250 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:23.675604  300250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:35:23.675684  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:23.703381  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:23.722052  300250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
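	The sed pipeline above injects a hosts block for host.minikube.internal into the CoreDNS Corefile and replaces the ConfigMap. A hedged way to confirm the record landed, assuming kubectl is pointed at this profile:
	
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	  # expected fragment, derived from the sed expressions above:
	  #   hosts {
	  #      192.168.103.1 host.minikube.internal
	  #      fallthrough
	  #   }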
	I1212 00:35:23.796379  300250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:23.797433  300250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:23.826413  300250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:23.939621  300250 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1212 00:35:24.216756  300250 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-079970" to be "Ready" ...
	I1212 00:35:24.221321  300250 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
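	With storage-provisioner and default-storageclass enabled, two hedged follow-up checks (the pod name matches the kube-system/storage-provisioner container visible in the CRI-O log below; the expectation of a class marked as default is an assumption based on minikube's usual "standard" StorageClass):
	
	  kubectl get storageclass                              # expect a class marked (default)
	  kubectl -n kube-system get pod storage-provisioner    # pod created by the addon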
	
	
	==> CRI-O <==
	Dec 12 00:34:42 embed-certs-858659 crio[569]: time="2025-12-12T00:34:42.260114195Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 00:34:42 embed-certs-858659 crio[569]: time="2025-12-12T00:34:42.26344945Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 00:34:42 embed-certs-858659 crio[569]: time="2025-12-12T00:34:42.263469038Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.486742292Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=73ca98c1-b646-4f43-b127-e5df985cbe34 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.489994689Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4af99606-83b0-4a21-9b58-203d1f91c6bb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.493923025Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm/dashboard-metrics-scraper" id=af31a627-410c-4532-a4d0-c21db5542dde name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.494115848Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.504155832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.505087114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.54103308Z" level=info msg="Created container 7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm/dashboard-metrics-scraper" id=af31a627-410c-4532-a4d0-c21db5542dde name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.54189676Z" level=info msg="Starting container: 7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50" id=6660415d-bfca-44a5-8ec0-f61809ed07e6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.543976036Z" level=info msg="Started container" PID=1760 containerID=7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm/dashboard-metrics-scraper id=6660415d-bfca-44a5-8ec0-f61809ed07e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7600ad226ffddfbd7d0d21211f90fc33c01c657f0743ec2baec27e209bbfdb0a
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.592694345Z" level=info msg="Removing container: 81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a" id=6300f71a-e44a-47b5-9c9d-0c540bfdeb9b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:34:56 embed-certs-858659 crio[569]: time="2025-12-12T00:34:56.603676874Z" level=info msg="Removed container 81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm/dashboard-metrics-scraper" id=6300f71a-e44a-47b5-9c9d-0c540bfdeb9b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.611133138Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=980d6bd8-e5f2-4d16-a626-3f5a12b76287 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.612192795Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c162c074-0482-4522-b488-d0d245a4f30e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.61453923Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a968f8d5-2475-4d58-83c6-1d73a2967ad0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.614674543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.618973785Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.619139312Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/57f00149134251d7f24f5fe16a0eef6bbec934bff7e247a8b7ce0c76b0ad6142/merged/etc/passwd: no such file or directory"
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.619166785Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/57f00149134251d7f24f5fe16a0eef6bbec934bff7e247a8b7ce0c76b0ad6142/merged/etc/group: no such file or directory"
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.620413948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.653558593Z" level=info msg="Created container 5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741: kube-system/storage-provisioner/storage-provisioner" id=a968f8d5-2475-4d58-83c6-1d73a2967ad0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.654140099Z" level=info msg="Starting container: 5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741" id=19d95e9c-f9cd-4b0d-9c8f-5e22b7caa94c name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:35:02 embed-certs-858659 crio[569]: time="2025-12-12T00:35:02.655868671Z" level=info msg="Started container" PID=1777 containerID=5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741 description=kube-system/storage-provisioner/storage-provisioner id=19d95e9c-f9cd-4b0d-9c8f-5e22b7caa94c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ea8821dfddcd916891f5107961fd4975e0c85bbe07877cf3f7aed9a973ed9b02
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5d164991478d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   ea8821dfddcd9       storage-provisioner                          kube-system
	7d19b30263df4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   7600ad226ffdd       dashboard-metrics-scraper-6ffb444bf9-52czm   kubernetes-dashboard
	70c68a4579255       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   0e3e8604a746a       kubernetes-dashboard-855c9754f9-4fw4k        kubernetes-dashboard
	ca1dd79202ff4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   1b93e5e35c341       busybox                                      default
	7c809ac147187       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   4d038ff0c6484       coredns-66bc5c9577-8x66p                     kube-system
	3c9dcfc0a39b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   ea8821dfddcd9       storage-provisioner                          kube-system
	06a70ae8e015e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           53 seconds ago      Running             kube-proxy                  0                   e9b88dfe031a4       kube-proxy-httpr                             kube-system
	0427b9f7a3afc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   06ac5092b2f13       kindnet-9jvdg                                kube-system
	6eb73517ebee8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           56 seconds ago      Running             kube-controller-manager     0                   757495dd1b037       kube-controller-manager-embed-certs-858659   kube-system
	07f4a35d8d4d1       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           56 seconds ago      Running             kube-scheduler              0                   511cd44010edd       kube-scheduler-embed-certs-858659            kube-system
	a4dfa21dd3b08       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           56 seconds ago      Running             kube-apiserver              0                   a1b2f474a9233       kube-apiserver-embed-certs-858659            kube-system
	3da8c9c634a05       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   59006ad0a99b6       etcd-embed-certs-858659                      kube-system
	
	
	==> coredns [7c809ac147187b6cddcd652c01f5ca9264456d90ed5b50ad626d6a454686cea5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35018 - 23966 "HINFO IN 7819491733879595867.4297871292551262055. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.120876135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-858659
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-858659
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=embed-certs-858659
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_33_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:33:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-858659
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:35:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:35:11 +0000   Fri, 12 Dec 2025 00:33:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:35:11 +0000   Fri, 12 Dec 2025 00:33:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:35:11 +0000   Fri, 12 Dec 2025 00:33:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:35:11 +0000   Fri, 12 Dec 2025 00:34:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-858659
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                116d1391-d680-420b-9323-ddc7dc668b8a
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-8x66p                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-embed-certs-858659                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-9jvdg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-858659             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-embed-certs-858659    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-httpr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-858659             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-52czm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4fw4k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node embed-certs-858659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node embed-certs-858659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x8 over 114s)  kubelet          Node embed-certs-858659 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node embed-certs-858659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node embed-certs-858659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node embed-certs-858659 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node embed-certs-858659 event: Registered Node embed-certs-858659 in Controller
	  Normal  NodeReady                92s                  kubelet          Node embed-certs-858659 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node embed-certs-858659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node embed-certs-858659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node embed-certs-858659 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                  node-controller  Node embed-certs-858659 event: Registered Node embed-certs-858659 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [3da8c9c634a0528028b8a0875544b6a0c41e72dc8bf1ff1da95beccf80094376] <==
	{"level":"warn","ts":"2025-12-12T00:34:30.154501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.165652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.172974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.180865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.187180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.193090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.199375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.205697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.212120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.222654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.231259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.242349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.250530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.258396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.265384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.272882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.280745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.289209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.295280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.302672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.309925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.327242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.335071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.342250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:34:30.389741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34088","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:35:25 up  1:17,  0 user,  load average: 5.06, 3.39, 2.08
	Linux embed-certs-858659 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0427b9f7a3afccc4063d5b86ecc7f448e0192c4c5812428503d97bac33bfe9cd] <==
	I1212 00:34:32.039180       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:34:32.039438       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1212 00:34:32.039628       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:34:32.039649       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:34:32.039670       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:34:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:34:32.328599       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:34:32.328685       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:34:32.328699       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:34:32.328830       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:34:32.729057       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:34:32.729091       1 metrics.go:72] Registering metrics
	I1212 00:34:32.729176       1 controller.go:711] "Syncing nftables rules"
	I1212 00:34:42.245940       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:34:42.245990       1 main.go:301] handling current node
	I1212 00:34:52.246791       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:34:52.246837       1 main.go:301] handling current node
	I1212 00:35:02.246842       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:35:02.246887       1 main.go:301] handling current node
	I1212 00:35:12.246996       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:35:12.247039       1 main.go:301] handling current node
	I1212 00:35:22.254561       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 00:35:22.254594       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a4dfa21dd3b082f3a9464428da14520464aa97d1551c9a8ddffbf56d063877a6] <==
	I1212 00:34:30.876895       1 aggregator.go:171] initial CRD sync complete...
	I1212 00:34:30.876907       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 00:34:30.876914       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:34:30.876920       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:34:30.876965       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 00:34:30.876997       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 00:34:30.877013       1 policy_source.go:240] refreshing policies
	I1212 00:34:30.877082       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 00:34:30.877121       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 00:34:30.877891       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:34:30.884332       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1212 00:34:30.885237       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 00:34:30.892241       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:34:30.904246       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 00:34:31.217739       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:34:31.246919       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:34:31.266164       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:34:31.273913       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:34:31.281770       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:34:31.315930       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.172.208"}
	I1212 00:34:31.326794       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.35.38"}
	I1212 00:34:31.781969       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:34:33.750094       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:34:33.851798       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:34:34.001462       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6eb73517ebee81333693231ad0296e35a758eeb5aa9cc45c7ee7663ea6e652e4] <==
	I1212 00:34:33.339307       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 00:34:33.342585       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 00:34:33.343945       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 00:34:33.345943       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 00:34:33.346186       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1212 00:34:33.347115       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 00:34:33.347149       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 00:34:33.347209       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 00:34:33.347304       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 00:34:33.347415       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-858659"
	I1212 00:34:33.347456       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 00:34:33.347528       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 00:34:33.347565       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 00:34:33.347962       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 00:34:33.348307       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 00:34:33.348462       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 00:34:33.349971       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 00:34:33.350066       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 00:34:33.354306       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 00:34:33.354351       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:34:33.366204       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 00:34:33.447231       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:34:33.447248       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 00:34:33.447255       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 00:34:33.467005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [06a70ae8e015e6ed73dee2a76a938f3ac6d6569ec98202ecea145fbd5fdd2e6e] <==
	I1212 00:34:31.922771       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:34:31.989779       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 00:34:32.090652       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 00:34:32.090733       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1212 00:34:32.090846       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:34:32.114692       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:34:32.114762       1 server_linux.go:132] "Using iptables Proxier"
	I1212 00:34:32.121168       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:34:32.121628       1 server.go:527] "Version info" version="v1.34.2"
	I1212 00:34:32.121660       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:32.124869       1 config.go:200] "Starting service config controller"
	I1212 00:34:32.124893       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:34:32.124911       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:34:32.124916       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:34:32.124929       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:34:32.124935       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:34:32.125622       1 config.go:309] "Starting node config controller"
	I1212 00:34:32.125792       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:34:32.225214       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:34:32.225230       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:34:32.225230       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:34:32.225966       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [07f4a35d8d4d1bc19d0c0dc6b015f381bc482dc379a9a416e57528498aa8e1e5] <==
	I1212 00:34:29.815614       1 serving.go:386] Generated self-signed cert in-memory
	W1212 00:34:30.815462       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:34:30.815535       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:34:30.815551       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:34:30.815560       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:34:30.856300       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 00:34:30.856407       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:30.859303       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:34:30.859390       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:34:30.860416       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 00:34:30.860502       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 00:34:30.960445       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:34:34 embed-certs-858659 kubelet[734]: I1212 00:34:34.036948     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhpc\" (UniqueName: \"kubernetes.io/projected/f2596c32-24e3-46ba-946a-60b89b5e73dc-kube-api-access-qqhpc\") pod \"kubernetes-dashboard-855c9754f9-4fw4k\" (UID: \"f2596c32-24e3-46ba-946a-60b89b5e73dc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4fw4k"
	Dec 12 00:34:34 embed-certs-858659 kubelet[734]: I1212 00:34:34.037000     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f2596c32-24e3-46ba-946a-60b89b5e73dc-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4fw4k\" (UID: \"f2596c32-24e3-46ba-946a-60b89b5e73dc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4fw4k"
	Dec 12 00:34:36 embed-certs-858659 kubelet[734]: I1212 00:34:36.820666     734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 00:34:37 embed-certs-858659 kubelet[734]: I1212 00:34:37.534120     734 scope.go:117] "RemoveContainer" containerID="507e33e6e4a97a9696d02408549febf4eb7070392e528ae4fb9562613b1b3760"
	Dec 12 00:34:38 embed-certs-858659 kubelet[734]: I1212 00:34:38.540625     734 scope.go:117] "RemoveContainer" containerID="507e33e6e4a97a9696d02408549febf4eb7070392e528ae4fb9562613b1b3760"
	Dec 12 00:34:38 embed-certs-858659 kubelet[734]: I1212 00:34:38.540963     734 scope.go:117] "RemoveContainer" containerID="81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a"
	Dec 12 00:34:38 embed-certs-858659 kubelet[734]: E1212 00:34:38.541152     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:34:39 embed-certs-858659 kubelet[734]: I1212 00:34:39.546465     734 scope.go:117] "RemoveContainer" containerID="81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a"
	Dec 12 00:34:39 embed-certs-858659 kubelet[734]: E1212 00:34:39.546726     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:34:42 embed-certs-858659 kubelet[734]: I1212 00:34:42.563613     734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4fw4k" podStartSLOduration=2.27918682 podStartE2EDuration="9.563589117s" podCreationTimestamp="2025-12-12 00:34:33 +0000 UTC" firstStartedPulling="2025-12-12 00:34:34.263037667 +0000 UTC m=+5.861040122" lastFinishedPulling="2025-12-12 00:34:41.547439954 +0000 UTC m=+13.145442419" observedRunningTime="2025-12-12 00:34:42.563318238 +0000 UTC m=+14.161320706" watchObservedRunningTime="2025-12-12 00:34:42.563589117 +0000 UTC m=+14.161591588"
	Dec 12 00:34:43 embed-certs-858659 kubelet[734]: I1212 00:34:43.201361     734 scope.go:117] "RemoveContainer" containerID="81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a"
	Dec 12 00:34:43 embed-certs-858659 kubelet[734]: E1212 00:34:43.201554     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:34:56 embed-certs-858659 kubelet[734]: I1212 00:34:56.485298     734 scope.go:117] "RemoveContainer" containerID="81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a"
	Dec 12 00:34:56 embed-certs-858659 kubelet[734]: I1212 00:34:56.591311     734 scope.go:117] "RemoveContainer" containerID="81f4be7d18cc2451154ee17fd0ddac97db4eecb8e770e25398c1782f33cccd5a"
	Dec 12 00:34:56 embed-certs-858659 kubelet[734]: I1212 00:34:56.592020     734 scope.go:117] "RemoveContainer" containerID="7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50"
	Dec 12 00:34:56 embed-certs-858659 kubelet[734]: E1212 00:34:56.592330     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:35:02 embed-certs-858659 kubelet[734]: I1212 00:35:02.610673     734 scope.go:117] "RemoveContainer" containerID="3c9dcfc0a39b026ebae759fa5108c0241e5b83a83111d1a9c78bde6446593eed"
	Dec 12 00:35:03 embed-certs-858659 kubelet[734]: I1212 00:35:03.201789     734 scope.go:117] "RemoveContainer" containerID="7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50"
	Dec 12 00:35:03 embed-certs-858659 kubelet[734]: E1212 00:35:03.202018     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:35:13 embed-certs-858659 kubelet[734]: I1212 00:35:13.485245     734 scope.go:117] "RemoveContainer" containerID="7d19b30263df4363fbd74e18b1a88cb116c9d902be0ebb3b0098d7308ab71d50"
	Dec 12 00:35:13 embed-certs-858659 kubelet[734]: E1212 00:35:13.485419     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-52czm_kubernetes-dashboard(0b7097d6-9dda-4e0d-8310-995b885116d7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-52czm" podUID="0b7097d6-9dda-4e0d-8310-995b885116d7"
	Dec 12 00:35:20 embed-certs-858659 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:35:20 embed-certs-858659 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:35:20 embed-certs-858659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:20 embed-certs-858659 systemd[1]: kubelet.service: Consumed 1.594s CPU time.
	
	
	==> kubernetes-dashboard [70c68a457925502a9aee2ba9aecc60dbf1e189971126f7728a2e5d3dad2af8c7] <==
	2025/12/12 00:34:41 Using namespace: kubernetes-dashboard
	2025/12/12 00:34:41 Using in-cluster config to connect to apiserver
	2025/12/12 00:34:41 Using secret token for csrf signing
	2025/12/12 00:34:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 00:34:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 00:34:41 Successful initial request to the apiserver, version: v1.34.2
	2025/12/12 00:34:41 Generating JWE encryption key
	2025/12/12 00:34:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 00:34:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 00:34:41 Initializing JWE encryption key from synchronized object
	2025/12/12 00:34:41 Creating in-cluster Sidecar client
	2025/12/12 00:34:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 00:34:41 Serving insecurely on HTTP port: 9090
	2025/12/12 00:35:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 00:34:41 Starting overwatch
	
	
	==> storage-provisioner [3c9dcfc0a39b026ebae759fa5108c0241e5b83a83111d1a9c78bde6446593eed] <==
	I1212 00:34:31.879311       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 00:35:01.884527       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5d164991478d239f54b5a6f53f3bcbddadcbc4baf2cd934eea9b27ca5f2ea741] <==
	I1212 00:35:02.667946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:35:02.675840       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:35:02.675875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 00:35:02.678103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:06.133389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:10.393970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:13.992491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:17.047722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:20.069630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:20.075118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:35:20.075289       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:35:20.075450       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-858659_d6aa2683-35ae-49e4-bebf-421db8cefbff!
	I1212 00:35:20.075449       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78ef122f-55d8-421e-a9ec-895d80aa214b", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-858659_d6aa2683-35ae-49e4-bebf-421db8cefbff became leader
	W1212 00:35:20.077604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:20.080985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:35:20.175691       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-858659_d6aa2683-35ae-49e4-bebf-421db8cefbff!
	W1212 00:35:22.087919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:22.100201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:24.107972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:24.113244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-858659 -n embed-certs-858659
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-858659 -n embed-certs-858659: exit status 2 (340.847337ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-858659 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.51s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-821472 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-821472 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (254.440423ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-821472 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-821472
helpers_test.go:244: (dbg) docker inspect newest-cni-821472:

-- stdout --
	[
	    {
	        "Id": "a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c",
	        "Created": "2025-12-12T00:35:06.315057819Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304148,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:35:06.348101667Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/hosts",
	        "LogPath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c-json.log",
	        "Name": "/newest-cni-821472",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-821472:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-821472",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c",
	                "LowerDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-821472",
	                "Source": "/var/lib/docker/volumes/newest-cni-821472/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-821472",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-821472",
	                "name.minikube.sigs.k8s.io": "newest-cni-821472",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5be303e145a5aae8171fe173ffd72bfd7ad15d618f288e4b04ceb4f749efb0e1",
	            "SandboxKey": "/var/run/docker/netns/5be303e145a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-821472": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "575ab5e56d9527c2eb921586a6877a45ff36317f0e61f2f54d90ea7972b9e6b3",
	                    "EndpointID": "aa90715ec30ed326cc6274246a574e0e09489d880e1f74b78f1947e5d8a956f2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "2e:30:c2:0a:3b:c7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-821472",
	                        "a4f2642ba7b2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
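(For reference, the 22/tcp mapping in the inspect output above, host port 33093 on 127.0.0.1, is what the harness dials for SSH; the same value can be read back with the Go template used elsewhere in this log, e.g.:

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-821472
	33093

A sketch of the query only; the port number is whatever Docker assigned at container start.)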
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-821472 -n newest-cni-821472
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-821472 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p no-preload-675290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ stop    │ -p no-preload-675290 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-858659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ stop    │ -p embed-certs-858659 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-743506 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p no-preload-675290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-858659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ image   │ old-k8s-version-743506 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p old-k8s-version-743506 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ image   │ no-preload-675290 image list --format=json                                                                                                                                                                                                           │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p no-preload-675290 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ delete  │ -p disable-driver-mounts-039387                                                                                                                                                                                                                      │ disable-driver-mounts-039387 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p default-k8s-diff-port-079970 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image   │ embed-certs-858659 image list --format=json                                                                                                                                                                                                          │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ pause   │ -p embed-certs-858659 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-821472 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ delete  │ -p embed-certs-858659                                                                                                                                                                                                                                │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:01.676845  302677 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:01.676993  302677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:01.677000  302677 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:01.677007  302677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:01.677289  302677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:35:01.677788  302677 out.go:368] Setting JSON to false
	I1212 00:35:01.679324  302677 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4648,"bootTime":1765495054,"procs":422,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:35:01.679417  302677 start.go:143] virtualization: kvm guest
	I1212 00:35:01.681915  302677 out.go:179] * [newest-cni-821472] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:35:01.683469  302677 notify.go:221] Checking for updates...
	I1212 00:35:01.684214  302677 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:01.685594  302677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:01.687066  302677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:01.689936  302677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:35:01.694337  302677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:35:01.696522  302677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:01.698402  302677 config.go:182] Loaded profile config "default-k8s-diff-port-079970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:01.698569  302677 config.go:182] Loaded profile config "embed-certs-858659": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:01.698687  302677 config.go:182] Loaded profile config "kubernetes-upgrade-605797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:01.698885  302677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:01.726288  302677 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:35:01.726426  302677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:01.811670  302677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:35:01.799849378 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:01.811833  302677 docker.go:319] overlay module found
	I1212 00:35:01.813434  302677 out.go:179] * Using the docker driver based on user configuration
	I1212 00:35:01.814701  302677 start.go:309] selected driver: docker
	I1212 00:35:01.814716  302677 start.go:927] validating driver "docker" against <nil>
	I1212 00:35:01.814728  302677 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:01.815393  302677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:01.879143  302677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 00:35:01.868556648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:01.879427  302677 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1212 00:35:01.879507  302677 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 00:35:01.879785  302677 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 00:35:01.882050  302677 out.go:179] * Using Docker driver with root privileges
	I1212 00:35:01.883265  302677 cni.go:84] Creating CNI manager for ""
	I1212 00:35:01.883332  302677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:01.883343  302677 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:35:01.883409  302677 start.go:353] cluster config:
	{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:01.885115  302677 out.go:179] * Starting "newest-cni-821472" primary control-plane node in "newest-cni-821472" cluster
	I1212 00:35:01.886758  302677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:35:01.888146  302677 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:35:01.889263  302677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:01.889310  302677 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:35:01.889328  302677 cache.go:65] Caching tarball of preloaded images
	I1212 00:35:01.889363  302677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:35:01.889427  302677 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:35:01.889449  302677 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:35:01.889596  302677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:01.889624  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json: {Name:mk8e6ad7ce238dbea537fa1dd3602a56e56a71c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:01.912683  302677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:35:01.912708  302677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:35:01.912721  302677 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:35:01.912755  302677 start.go:360] acquireMachinesLock for newest-cni-821472: {Name:mk1920b4afd40f764aad092389429d0db04875a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:35:01.912844  302677 start.go:364] duration metric: took 68.015µs to acquireMachinesLock for "newest-cni-821472"
	I1212 00:35:01.912868  302677 start.go:93] Provisioning new machine with config: &{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:35:01.912974  302677 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:34:57.061451  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:34:57.061496  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:34:57.078642  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:34:57.078672  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:34:57.147934  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:34:57.147960  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:34:57.147976  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:57.187226  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:34:57.187259  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:57.217714  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:34:57.217752  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:34:57.247042  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:34:57.247073  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:34:57.306744  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:34:57.306777  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:34:59.839534  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:34:59.839948  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:34:59.839995  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:34:59.840045  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:34:59.865677  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:34:59.865695  263844 cri.go:89] found id: ""
	I1212 00:34:59.865702  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:34:59.865745  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:59.869496  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:34:59.869563  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:34:59.894925  263844 cri.go:89] found id: ""
	I1212 00:34:59.894952  263844 logs.go:282] 0 containers: []
	W1212 00:34:59.894961  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:34:59.894969  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:34:59.895019  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:34:59.919987  263844 cri.go:89] found id: ""
	I1212 00:34:59.920013  263844 logs.go:282] 0 containers: []
	W1212 00:34:59.920027  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:34:59.920035  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:34:59.920089  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:34:59.949410  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:34:59.949436  263844 cri.go:89] found id: ""
	I1212 00:34:59.949445  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:34:59.949527  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:34:59.953413  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:34:59.953491  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:34:59.981859  263844 cri.go:89] found id: ""
	I1212 00:34:59.981886  263844 logs.go:282] 0 containers: []
	W1212 00:34:59.981897  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:34:59.981905  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:34:59.981958  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:00.014039  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:00.014064  263844 cri.go:89] found id: ""
	I1212 00:35:00.014093  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:00.014157  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:00.019033  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:00.019101  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:00.047071  263844 cri.go:89] found id: ""
	I1212 00:35:00.047098  263844 logs.go:282] 0 containers: []
	W1212 00:35:00.047110  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:00.047132  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:00.047182  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:00.074181  263844 cri.go:89] found id: ""
	I1212 00:35:00.074200  263844 logs.go:282] 0 containers: []
	W1212 00:35:00.074214  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:00.074222  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:00.074234  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:00.151605  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:00.151642  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:00.166946  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:00.166974  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:00.225208  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:00.225231  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:00.225245  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:00.258713  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:00.258739  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:00.286626  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:00.286654  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:00.311078  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:00.311101  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:00.366630  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:00.366659  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 00:34:57.522454  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:35:00.022987  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	I1212 00:35:01.428421  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Running}}
	I1212 00:35:01.447336  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:01.468796  300250 cli_runner.go:164] Run: docker exec default-k8s-diff-port-079970 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:35:01.515960  300250 oci.go:144] the created container "default-k8s-diff-port-079970" has a running status.
	I1212 00:35:01.515989  300250 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa...
	I1212 00:35:01.609883  300250 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:35:01.636104  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:01.657225  300250 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:35:01.657249  300250 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-079970 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:35:01.719543  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:01.742574  300250 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:01.742665  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:01.772749  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:01.773093  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:01.773112  300250 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:01.775590  300250 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58832->127.0.0.1:33088: read: connection reset by peer
	I1212 00:35:04.907004  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-079970
	
	I1212 00:35:04.907029  300250 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-079970"
	I1212 00:35:04.907083  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:04.925088  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:04.925306  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:04.925325  300250 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-079970 && echo "default-k8s-diff-port-079970" | sudo tee /etc/hostname
	I1212 00:35:05.104218  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-079970
	
	I1212 00:35:05.104321  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:05.122538  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:05.122808  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:05.122836  300250 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-079970' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-079970/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-079970' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:05.254825  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:05.254864  300250 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:05.254892  300250 ubuntu.go:190] setting up certificates
	I1212 00:35:05.254908  300250 provision.go:84] configureAuth start
	I1212 00:35:05.254998  300250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079970
	I1212 00:35:05.273295  300250 provision.go:143] copyHostCerts
	I1212 00:35:05.273350  300250 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:05.273360  300250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:05.273417  300250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:05.273528  300250 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:05.273537  300250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:05.273566  300250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:05.273626  300250 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:05.273633  300250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:05.273656  300250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:05.273705  300250 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-079970 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-079970 localhost minikube]
	I1212 00:35:05.296916  300250 provision.go:177] copyRemoteCerts
	I1212 00:35:05.296964  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:05.296999  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:05.315032  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:05.408808  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:05.456913  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 00:35:05.473342  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:35:05.490425  300250 provision.go:87] duration metric: took 235.494662ms to configureAuth
	I1212 00:35:05.490445  300250 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:05.490622  300250 config.go:182] Loaded profile config "default-k8s-diff-port-079970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:05.490727  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:05.507963  300250 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:05.508252  300250 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1212 00:35:05.508277  300250 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:01.914973  302677 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 00:35:01.915193  302677 start.go:159] libmachine.API.Create for "newest-cni-821472" (driver="docker")
	I1212 00:35:01.915221  302677 client.go:173] LocalClient.Create starting
	I1212 00:35:01.915286  302677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem
	I1212 00:35:01.915328  302677 main.go:143] libmachine: Decoding PEM data...
	I1212 00:35:01.915347  302677 main.go:143] libmachine: Parsing certificate...
	I1212 00:35:01.915411  302677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem
	I1212 00:35:01.915433  302677 main.go:143] libmachine: Decoding PEM data...
	I1212 00:35:01.915443  302677 main.go:143] libmachine: Parsing certificate...
	I1212 00:35:01.915761  302677 cli_runner.go:164] Run: docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:35:01.931948  302677 cli_runner.go:211] docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:35:01.932008  302677 network_create.go:284] running [docker network inspect newest-cni-821472] to gather additional debugging logs...
	I1212 00:35:01.932032  302677 cli_runner.go:164] Run: docker network inspect newest-cni-821472
	W1212 00:35:01.950167  302677 cli_runner.go:211] docker network inspect newest-cni-821472 returned with exit code 1
	I1212 00:35:01.950190  302677 network_create.go:287] error running [docker network inspect newest-cni-821472]: docker network inspect newest-cni-821472: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-821472 not found
	I1212 00:35:01.950205  302677 network_create.go:289] output of [docker network inspect newest-cni-821472]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-821472 not found
	
	** /stderr **
	I1212 00:35:01.950338  302677 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:01.968471  302677 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ad72354fdcf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:1e:a7:00:22:62} reservation:<nil>}
	I1212 00:35:01.969916  302677 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-18bf2e8051c8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:8e:8f:a6:9d:0c} reservation:<nil>}
	I1212 00:35:01.970685  302677 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5251ddf0a35 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:5a:cf:fd:1d:2a} reservation:<nil>}
	I1212 00:35:01.971365  302677 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b830d0}
	I1212 00:35:01.971387  302677 network_create.go:124] attempt to create docker network newest-cni-821472 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 00:35:01.971442  302677 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-821472 newest-cni-821472
	I1212 00:35:02.021201  302677 network_create.go:108] docker network newest-cni-821472 192.168.76.0/24 created
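The "skipping subnet" lines above show how the cluster network is chosen: subnets already owned by existing Docker bridges (192.168.49.0/24, .58, .67) are skipped and the first unused candidate (192.168.76.0/24) is created. A rough Go sketch of that selection using the docker CLI follows; it is a simplified stand-in for minikube's network_create.go, with illustrative helper names.

package netsketch

import (
	"fmt"
	"os/exec"
	"strings"
)

// usedSubnets asks Docker which subnets are already claimed by existing networks.
func usedSubnets() (map[string]bool, error) {
	out, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	used := map[string]bool{}
	for _, id := range strings.Fields(string(out)) {
		sub, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			return nil, err
		}
		for _, s := range strings.Fields(string(sub)) {
			used[s] = true
		}
	}
	return used, nil
}

// createClusterNetwork walks the same candidate progression as the log
// (192.168.49.0/24, .58, .67, .76, ...) and creates a bridge on the first free one.
func createClusterNetwork(name string) (string, error) {
	used, err := usedSubnets()
	if err != nil {
		return "", err
	}
	for third := 49; third < 256; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if used[subnet] {
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		return subnet, exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).Run()
	}
	return "", fmt.Errorf("no free subnet found")
}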
	I1212 00:35:02.021230  302677 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-821472" container
	I1212 00:35:02.021299  302677 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:35:02.039525  302677 cli_runner.go:164] Run: docker volume create newest-cni-821472 --label name.minikube.sigs.k8s.io=newest-cni-821472 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:35:02.058234  302677 oci.go:103] Successfully created a docker volume newest-cni-821472
	I1212 00:35:02.058323  302677 cli_runner.go:164] Run: docker run --rm --name newest-cni-821472-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-821472 --entrypoint /usr/bin/test -v newest-cni-821472:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:35:02.440426  302677 oci.go:107] Successfully prepared a docker volume newest-cni-821472
	I1212 00:35:02.440502  302677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:02.440515  302677 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:35:02.440587  302677 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-821472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:35:06.233731  302677 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-821472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.793098349s)
	I1212 00:35:06.233764  302677 kic.go:203] duration metric: took 3.793245572s to extract preloaded images to volume ...
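The 3.79s step above extracts the preloaded image tarball into the cluster's named volume by running a short-lived kicbase container with the tarball mounted read-only. A minimal Go sketch of the same docker run invocation; extractPreload is a hypothetical wrapper around the exact command shown in the log.

package preloadsketch

import "os/exec"

// extractPreload untars an lz4-compressed preload tarball into the named volume
// that will later back /var in the node container.
func extractPreload(tarball, volume, kicbaseImage string) error {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		kicbaseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
}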
	W1212 00:35:06.233853  302677 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 00:35:06.233893  302677 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 00:35:06.233935  302677 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:35:06.298519  302677 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-821472 --name newest-cni-821472 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-821472 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-821472 --network newest-cni-821472 --ip 192.168.76.2 --volume newest-cni-821472:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 00:35:06.590905  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Running}}
	I1212 00:35:06.609781  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:06.629292  302677 cli_runner.go:164] Run: docker exec newest-cni-821472 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:35:02.896994  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:02.897634  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:02.897694  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:02.897758  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:02.924538  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:02.924572  263844 cri.go:89] found id: ""
	I1212 00:35:02.924582  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:02.924639  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:02.928500  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:02.928556  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:02.957640  263844 cri.go:89] found id: ""
	I1212 00:35:02.957663  263844 logs.go:282] 0 containers: []
	W1212 00:35:02.957675  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:02.957682  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:02.957749  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:02.984525  263844 cri.go:89] found id: ""
	I1212 00:35:02.984548  263844 logs.go:282] 0 containers: []
	W1212 00:35:02.984558  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:02.984566  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:02.984634  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:03.010719  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:03.010742  263844 cri.go:89] found id: ""
	I1212 00:35:03.010751  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:03.010804  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:03.014657  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:03.014720  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:03.040940  263844 cri.go:89] found id: ""
	I1212 00:35:03.040963  263844 logs.go:282] 0 containers: []
	W1212 00:35:03.040973  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:03.040980  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:03.041037  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:03.067543  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:03.067569  263844 cri.go:89] found id: ""
	I1212 00:35:03.067580  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:03.067641  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:03.071653  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:03.071720  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:03.099943  263844 cri.go:89] found id: ""
	I1212 00:35:03.099969  263844 logs.go:282] 0 containers: []
	W1212 00:35:03.099980  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:03.099988  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:03.100045  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:03.127350  263844 cri.go:89] found id: ""
	I1212 00:35:03.127376  263844 logs.go:282] 0 containers: []
	W1212 00:35:03.127388  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:03.127400  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:03.127416  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:03.208587  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:03.208622  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:03.223382  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:03.223406  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:03.287357  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:03.287381  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:03.287402  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:03.320653  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:03.320683  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:03.347455  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:03.347501  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:03.377133  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:03.377159  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:03.442737  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:03.442776  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:05.978082  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:05.978455  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:05.978530  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:05.978587  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:06.006237  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:06.006255  263844 cri.go:89] found id: ""
	I1212 00:35:06.006262  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:06.006310  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:06.010366  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:06.010456  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:06.039694  263844 cri.go:89] found id: ""
	I1212 00:35:06.039716  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.039725  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:06.039733  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:06.039785  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:06.065594  263844 cri.go:89] found id: ""
	I1212 00:35:06.065618  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.065628  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:06.065639  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:06.065685  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:06.091181  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:06.091200  263844 cri.go:89] found id: ""
	I1212 00:35:06.091207  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:06.091259  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:06.094942  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:06.095000  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:06.118810  263844 cri.go:89] found id: ""
	I1212 00:35:06.118827  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.118834  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:06.118839  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:06.118881  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:06.143665  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:06.143686  263844 cri.go:89] found id: ""
	I1212 00:35:06.143694  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:06.143746  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:06.147318  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:06.147376  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:06.172858  263844 cri.go:89] found id: ""
	I1212 00:35:06.172883  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.172893  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:06.172901  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:06.172943  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:06.198461  263844 cri.go:89] found id: ""
	I1212 00:35:06.198494  263844 logs.go:282] 0 containers: []
	W1212 00:35:06.198503  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:06.198514  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:06.198529  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:06.224744  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:06.224766  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:06.291819  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:06.291848  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:06.324908  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:06.324941  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:06.411894  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:06.411924  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:06.430682  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:06.430713  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:06.498584  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:06.498606  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:06.498619  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:06.531410  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:06.531436  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	W1212 00:35:02.521725  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	W1212 00:35:05.022110  292217 pod_ready.go:104] pod "coredns-66bc5c9577-8x66p" is not "Ready", error: <nil>
	I1212 00:35:07.023465  292217 pod_ready.go:94] pod "coredns-66bc5c9577-8x66p" is "Ready"
	I1212 00:35:07.023510  292217 pod_ready.go:86] duration metric: took 34.506810459s for pod "coredns-66bc5c9577-8x66p" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.026308  292217 pod_ready.go:83] waiting for pod "etcd-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.031339  292217 pod_ready.go:94] pod "etcd-embed-certs-858659" is "Ready"
	I1212 00:35:07.031364  292217 pod_ready.go:86] duration metric: took 5.030782ms for pod "etcd-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.033638  292217 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.037393  292217 pod_ready.go:94] pod "kube-apiserver-embed-certs-858659" is "Ready"
	I1212 00:35:07.037409  292217 pod_ready.go:86] duration metric: took 3.7473ms for pod "kube-apiserver-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.039158  292217 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:05.994458  300250 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:05.994507  300250 machine.go:97] duration metric: took 4.251910101s to provisionDockerMachine
	I1212 00:35:05.994521  300250 client.go:176] duration metric: took 10.066395512s to LocalClient.Create
	I1212 00:35:05.994537  300250 start.go:167] duration metric: took 10.066456505s to libmachine.API.Create "default-k8s-diff-port-079970"
	I1212 00:35:05.994547  300250 start.go:293] postStartSetup for "default-k8s-diff-port-079970" (driver="docker")
	I1212 00:35:05.994559  300250 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:05.994632  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:05.994709  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.013629  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.193644  300250 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:06.197617  300250 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:06.197647  300250 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:06.197663  300250 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:06.197726  300250 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:06.197851  300250 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:06.197983  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:06.206129  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:06.228239  300250 start.go:296] duration metric: took 233.677045ms for postStartSetup
	I1212 00:35:06.228629  300250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079970
	I1212 00:35:06.249261  300250 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/config.json ...
	I1212 00:35:06.249602  300250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:06.249656  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.274204  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.370994  300250 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:06.375520  300250 start.go:128] duration metric: took 10.449576963s to createHost
	I1212 00:35:06.375543  300250 start.go:83] releasing machines lock for "default-k8s-diff-port-079970", held for 10.449707099s
	I1212 00:35:06.375608  300250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079970
	I1212 00:35:06.394357  300250 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:06.394412  300250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:06.394417  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.394533  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:06.414536  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.414896  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:06.513946  300250 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:06.578632  300250 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:06.614844  300250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:06.619698  300250 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:06.619768  300250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:06.648376  300250 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:35:06.648400  300250 start.go:496] detecting cgroup driver to use...
	I1212 00:35:06.648437  300250 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:06.648518  300250 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:06.671629  300250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:06.686247  300250 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:06.686300  300250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:06.705702  300250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:06.724679  300250 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:06.831577  300250 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:06.959398  300250 docker.go:234] disabling docker service ...
	I1212 00:35:06.959458  300250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:06.980174  300250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:06.994308  300250 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:07.087727  300250 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:07.164553  300250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:35:07.176958  300250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:07.190774  300250 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:07.190827  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.200432  300250 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:07.200487  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.208778  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.216974  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.225627  300250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:07.233231  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.241973  300250 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.256118  300250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:07.264415  300250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:07.271752  300250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:07.278861  300250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:07.354779  300250 ssh_runner.go:195] Run: sudo systemctl restart crio
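The sed commands above edit the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and restart: they pin the pause image, switch the cgroup manager to systemd, put conmon into the pod cgroup, and open unprivileged ports via a default sysctl. Roughly, and only as an assumed illustration (the real file carries additional settings), the resulting fragment looks like the Go constant below.

package criosketch

// crioDropIn approximates the drop-in produced by the edits logged above; it is
// an editorial reconstruction, not a capture of the file on the node.
const crioDropIn = `
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`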
	I1212 00:35:07.473228  300250 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:07.473293  300250 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:07.476916  300250 start.go:564] Will wait 60s for crictl version
	I1212 00:35:07.476963  300250 ssh_runner.go:195] Run: which crictl
	I1212 00:35:07.480267  300250 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:07.504728  300250 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:35:07.504799  300250 ssh_runner.go:195] Run: crio --version
	I1212 00:35:07.530870  300250 ssh_runner.go:195] Run: crio --version
	I1212 00:35:07.558537  300250 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:35:07.220781  292217 pod_ready.go:94] pod "kube-controller-manager-embed-certs-858659" is "Ready"
	I1212 00:35:07.220800  292217 pod_ready.go:86] duration metric: took 181.625687ms for pod "kube-controller-manager-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.420602  292217 pod_ready.go:83] waiting for pod "kube-proxy-httpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:07.820186  292217 pod_ready.go:94] pod "kube-proxy-httpr" is "Ready"
	I1212 00:35:07.820209  292217 pod_ready.go:86] duration metric: took 399.582316ms for pod "kube-proxy-httpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:08.021113  292217 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:08.420866  292217 pod_ready.go:94] pod "kube-scheduler-embed-certs-858659" is "Ready"
	I1212 00:35:08.420892  292217 pod_ready.go:86] duration metric: took 399.752565ms for pod "kube-scheduler-embed-certs-858659" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:08.420907  292217 pod_ready.go:40] duration metric: took 35.909864777s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:35:08.463083  292217 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:35:08.464948  292217 out.go:179] * Done! kubectl is now configured to use "embed-certs-858659" cluster and "default" namespace by default
	I1212 00:35:07.559551  300250 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-079970 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:07.576167  300250 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:07.580098  300250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:07.589991  300250 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-079970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:07.590086  300250 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:35:07.590127  300250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:07.620329  300250 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:07.620351  300250 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:07.620400  300250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:07.644998  300250 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:07.645015  300250 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:07.645022  300250 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1212 00:35:07.645104  300250 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-079970 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:35:07.645170  300250 ssh_runner.go:195] Run: crio config
	I1212 00:35:07.690243  300250 cni.go:84] Creating CNI manager for ""
	I1212 00:35:07.690273  300250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:07.690294  300250 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:35:07.690324  300250 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-079970 NodeName:default-k8s-diff-port-079970 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:07.690467  300250 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-079970"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:35:07.690549  300250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:35:07.698235  300250 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:07.698297  300250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:07.705827  300250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1212 00:35:07.717731  300250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:35:07.731840  300250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
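The 2227-byte kubeadm.yaml.new written above is the config dump shown earlier; later in the log it is copied to /var/tmp/minikube/kubeadm.yaml before cluster bootstrap. A minimal sketch, assuming the standard kubeadm CLI, of how such a file is consumed on a first start; the exact flags minikube adds (for example its preflight-error ignores) are not shown in this excerpt.

package kubeadmsketch

import (
	"os"
	"os/exec"
)

// initCluster runs kubeadm against a generated config file and streams its output.
func initCluster(configPath string) error {
	cmd := exec.Command("sudo", "kubeadm", "init", "--config", configPath)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}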
	I1212 00:35:07.743718  300250 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:07.747092  300250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:07.756184  300250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:07.837070  300250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:07.858364  300250 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970 for IP: 192.168.103.2
	I1212 00:35:07.858382  300250 certs.go:195] generating shared ca certs ...
	I1212 00:35:07.858403  300250 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:07.858571  300250 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:07.858647  300250 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:07.858665  300250 certs.go:257] generating profile certs ...
	I1212 00:35:07.858744  300250 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.key
	I1212 00:35:07.858767  300250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.crt with IP's: []
	I1212 00:35:08.016689  300250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.crt ...
	I1212 00:35:08.016714  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.crt: {Name:mk279da736f294eb825962e1a4edee25eac6315c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.016870  300250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.key ...
	I1212 00:35:08.016882  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/client.key: {Name:mkfdedaa1208476212f31534b586a502d7549554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.016963  300250 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396
	I1212 00:35:08.016979  300250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1212 00:35:08.149793  300250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396 ...
	I1212 00:35:08.149819  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396: {Name:mka877547c686ae761ca469a57d769f1b6209dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.149964  300250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396 ...
	I1212 00:35:08.149977  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396: {Name:mk85552646d74132d4452b62dd9a7c99b447f23e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.150046  300250 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt.0181c396 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt
	I1212 00:35:08.150128  300250 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key.0181c396 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key
	I1212 00:35:08.150185  300250 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key
	I1212 00:35:08.150200  300250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt with IP's: []
	I1212 00:35:08.211959  300250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt ...
	I1212 00:35:08.211982  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt: {Name:mkf49b46b9109148320675b03a3fb18a2de5b067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.212121  300250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key ...
	I1212 00:35:08.212135  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key: {Name:mke8549528b3e93e632d75bea181d060323f48f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:08.212313  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:08.212351  300250 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:08.212361  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:08.212385  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:08.212408  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:08.212438  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:08.212491  300250 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:08.213045  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:08.231539  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:08.248058  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:08.265115  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:08.282076  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 00:35:08.298207  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:08.314097  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:08.329970  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/default-k8s-diff-port-079970/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:35:08.345884  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:08.363142  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:08.379029  300250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:08.394643  300250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:35:08.405961  300250 ssh_runner.go:195] Run: openssl version
	I1212 00:35:08.411533  300250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.418428  300250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:08.425583  300250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.429027  300250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.429071  300250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:08.466251  300250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:08.473689  300250 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:35:08.482528  300250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.493011  300250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:08.500005  300250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.503427  300250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.503503  300250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:08.544140  300250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:08.552101  300250 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:35:08.560501  300250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.569346  300250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:08.578244  300250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.582362  300250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.582410  300250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:08.623691  300250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:08.631946  300250 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
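Each CA bundle copied to /usr/share/ca-certificates above is registered with OpenSSL by hashing its subject (openssl x509 -hash -noout) and symlinking it as <hash>.0 under /etc/ssl/certs, which is why minikubeCA.pem ends up as b5213941.0. A small Go sketch of that step; linkCACert is an illustrative helper, not minikube's code.

package certlinksketch

import (
	"os/exec"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM certificate and links
// it as <hash>.0 in /etc/ssl/certs so OpenSSL can find it during verification.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return exec.Command("sudo", "ln", "-fs", pemPath, "/etc/ssl/certs/"+hash+".0").Run()
}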
	I1212 00:35:08.639036  300250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:08.642462  300250 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:35:08.642530  300250 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-079970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:08.642593  300250 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:08.642659  300250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:08.668461  300250 cri.go:89] found id: ""
	I1212 00:35:08.668546  300250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:08.676116  300250 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:35:08.683428  300250 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:35:08.683469  300250 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:35:08.690898  300250 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:35:08.690916  300250 kubeadm.go:158] found existing configuration files:
	
	I1212 00:35:08.690954  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 00:35:08.698073  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:35:08.698143  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:35:08.705407  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 00:35:08.712693  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:35:08.712737  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:35:08.719767  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 00:35:08.727185  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:35:08.727228  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:35:08.736123  300250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 00:35:08.745640  300250 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:35:08.745692  300250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
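The grep/rm sequence above is the stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it still references the expected control-plane endpoint, otherwise it is removed before kubeadm init runs. A compact sketch of the same logic (endpoint and port taken from this profile):

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done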
	I1212 00:35:08.752893  300250 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:35:08.793583  300250 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:35:08.793654  300250 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:35:08.815256  300250 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:35:08.815324  300250 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:35:08.815354  300250 kubeadm.go:319] OS: Linux
	I1212 00:35:08.815415  300250 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:35:08.815505  300250 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:35:08.815591  300250 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:35:08.815662  300250 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:35:08.815736  300250 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:35:08.815811  300250 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:35:08.815859  300250 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:35:08.815899  300250 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:35:08.870824  300250 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:35:08.870961  300250 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:35:08.871069  300250 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:35:08.879811  300250 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:35:08.882551  300250 out.go:252]   - Generating certificates and keys ...
	I1212 00:35:08.882660  300250 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:35:08.882751  300250 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:35:09.176928  300250 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:35:09.363520  300250 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:35:09.693528  300250 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:35:09.731067  300250 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:35:09.965845  300250 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:35:09.966025  300250 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-079970 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 00:35:10.193416  300250 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:35:10.193638  300250 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-079970 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 00:35:10.269537  300250 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:35:06.679223  302677 oci.go:144] the created container "newest-cni-821472" has a running status.
	I1212 00:35:06.679254  302677 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa...
	I1212 00:35:06.738145  302677 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:35:06.769410  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:06.786614  302677 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:35:06.786644  302677 kic_runner.go:114] Args: [docker exec --privileged newest-cni-821472 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:35:06.834201  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:06.855717  302677 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:06.855800  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:06.888834  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:06.889195  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:06.889213  302677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:06.890003  302677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43636->127.0.0.1:33093: read: connection reset by peer
	I1212 00:35:10.020812  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:10.020840  302677 ubuntu.go:182] provisioning hostname "newest-cni-821472"
	I1212 00:35:10.020911  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.038793  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:10.039049  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:10.039068  302677 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-821472 && echo "newest-cni-821472" | sudo tee /etc/hostname
	I1212 00:35:10.178884  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:10.178957  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.196926  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:10.197149  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:10.197168  302677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-821472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-821472/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-821472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:10.325457  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:10.325500  302677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:10.325523  302677 ubuntu.go:190] setting up certificates
	I1212 00:35:10.325534  302677 provision.go:84] configureAuth start
	I1212 00:35:10.325583  302677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:10.343437  302677 provision.go:143] copyHostCerts
	I1212 00:35:10.343515  302677 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:10.343528  302677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:10.343592  302677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:10.343673  302677 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:10.343682  302677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:10.343707  302677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:10.343760  302677 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:10.343767  302677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:10.343788  302677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:10.343834  302677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.newest-cni-821472 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-821472]
	I1212 00:35:10.384676  302677 provision.go:177] copyRemoteCerts
	I1212 00:35:10.384719  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:10.384774  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.402191  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:10.496002  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:10.517047  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:35:10.533068  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:35:10.549151  302677 provision.go:87] duration metric: took 223.607325ms to configureAuth
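configureAuth above generates the Docker machine server certificate in Go, signing it with the local CA and embedding the SANs listed in the log. A purely illustrative openssl equivalent, with assumed file names, not the code minikube actually runs:

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.newest-cni-821472"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -days 365 -out server.pem \
        -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:newest-cni-821472")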
	I1212 00:35:10.549173  302677 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:10.549372  302677 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:10.549507  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.566450  302677 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:10.566801  302677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 00:35:10.566825  302677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:10.840386  302677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:10.840420  302677 machine.go:97] duration metric: took 3.984678656s to provisionDockerMachine
	I1212 00:35:10.840443  302677 client.go:176] duration metric: took 8.925215047s to LocalClient.Create
	I1212 00:35:10.840469  302677 start.go:167] duration metric: took 8.925275616s to libmachine.API.Create "newest-cni-821472"
	I1212 00:35:10.840505  302677 start.go:293] postStartSetup for "newest-cni-821472" (driver="docker")
	I1212 00:35:10.840523  302677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:10.840596  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:10.840670  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:10.860332  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:10.956443  302677 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:10.959764  302677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:10.959792  302677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:10.959804  302677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:10.959857  302677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:10.959954  302677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:10.960087  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:10.967140  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:10.985778  302677 start.go:296] duration metric: took 145.261225ms for postStartSetup
	I1212 00:35:10.986158  302677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:11.004009  302677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:11.004287  302677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:11.004341  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:11.021447  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:11.111879  302677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:11.116117  302677 start.go:128] duration metric: took 9.20313041s to createHost
	I1212 00:35:11.116139  302677 start.go:83] releasing machines lock for "newest-cni-821472", held for 9.203283528s
	I1212 00:35:11.116244  302677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:11.133795  302677 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:11.133840  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:11.133860  302677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:11.133944  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:11.152108  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:11.153248  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:11.246020  302677 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:11.300646  302677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:11.334446  302677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:11.338640  302677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:11.338700  302677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:11.362650  302677 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:35:11.362667  302677 start.go:496] detecting cgroup driver to use...
	I1212 00:35:11.362703  302677 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:11.362745  302677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:11.378085  302677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:11.389462  302677 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:11.389524  302677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:11.405814  302677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:11.421748  302677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:11.501993  302677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:11.586974  302677 docker.go:234] disabling docker service ...
	I1212 00:35:11.587043  302677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:11.606777  302677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:11.618506  302677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:09.059116  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:09.059525  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:09.059584  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:09.059632  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:09.089294  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:09.089312  263844 cri.go:89] found id: ""
	I1212 00:35:09.089319  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:09.089386  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:09.093045  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:09.093104  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:09.118495  263844 cri.go:89] found id: ""
	I1212 00:35:09.118518  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.118527  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:09.118535  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:09.118588  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:09.143107  263844 cri.go:89] found id: ""
	I1212 00:35:09.143128  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.143137  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:09.143144  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:09.143196  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:09.167389  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:09.167405  263844 cri.go:89] found id: ""
	I1212 00:35:09.167414  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:09.167459  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:09.171059  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:09.171107  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:09.195715  263844 cri.go:89] found id: ""
	I1212 00:35:09.195735  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.195744  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:09.195752  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:09.195808  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:09.220144  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:09.220165  263844 cri.go:89] found id: ""
	I1212 00:35:09.220174  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:09.220239  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:09.223868  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:09.223918  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:09.248862  263844 cri.go:89] found id: ""
	I1212 00:35:09.248885  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.248893  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:09.248900  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:09.248955  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:09.275068  263844 cri.go:89] found id: ""
	I1212 00:35:09.275093  263844 logs.go:282] 0 containers: []
	W1212 00:35:09.275102  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:09.275114  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:09.275127  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:09.366099  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:09.366126  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:09.379543  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:09.379566  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:09.432943  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:09.432958  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:09.432969  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:09.463524  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:09.463547  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:09.490294  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:09.490317  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:09.514799  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:09.514828  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:09.571564  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:09.571589  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
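The log collection above runs because the apiserver healthz probe keeps failing with "connection refused" while only apiserver, scheduler and controller-manager containers exist and no etcd container is found. A hedged sketch of the same checks run by hand against the affected node (the profile name is a placeholder, not taken from this log):

    minikube -p <profile> ssh -- sudo crictl ps -a --name etcd
    minikube -p <profile> ssh -- curl -sk https://192.168.85.2:8443/healthz
    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 100 --no-pager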
	I1212 00:35:10.788385  300250 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:35:11.057552  300250 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:35:11.057688  300250 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:35:11.225689  300250 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:35:11.681036  300250 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:35:11.823928  300250 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:35:11.882037  300250 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:35:12.127371  300250 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:35:12.128027  300250 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:35:12.132435  300250 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
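Once the control-plane phase above completes, kubeadm writes one static Pod manifest per component into the manifest folder it names; the expected contents can be listed directly on the node:

    sudo ls /etc/kubernetes/manifests
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml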
	I1212 00:35:11.703202  302677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:11.784049  302677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:35:11.794982  302677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:11.808444  302677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:11.808518  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.818076  302677 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:11.818133  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.826275  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.834017  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.841736  302677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:11.848971  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.856669  302677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.868903  302677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:11.876935  302677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:11.883730  302677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:11.890166  302677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:11.971228  302677 ssh_runner.go:195] Run: sudo systemctl restart crio
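The sed edits above rewrite the CRI-O drop-in before the restart. After they apply, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read roughly as sketched below (exact layout may vary between kicbase images):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",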
	I1212 00:35:12.107537  302677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:12.107601  302677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:12.111902  302677 start.go:564] Will wait 60s for crictl version
	I1212 00:35:12.111953  302677 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.115979  302677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:12.143022  302677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:35:12.143092  302677 ssh_runner.go:195] Run: crio --version
	I1212 00:35:12.172766  302677 ssh_runner.go:195] Run: crio --version
	I1212 00:35:12.214057  302677 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:35:12.215182  302677 cli_runner.go:164] Run: docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:12.234840  302677 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:12.238650  302677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:12.250364  302677 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 00:35:12.251584  302677 kubeadm.go:884] updating cluster {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:12.251750  302677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:12.251814  302677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:12.286886  302677 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:12.286909  302677 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:12.286960  302677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:12.312952  302677 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:12.312977  302677 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:12.312986  302677 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 00:35:12.313097  302677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-821472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:35:12.313180  302677 ssh_runner.go:195] Run: crio config
	I1212 00:35:12.374192  302677 cni.go:84] Creating CNI manager for ""
	I1212 00:35:12.374218  302677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:12.374237  302677 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 00:35:12.374270  302677 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-821472 NodeName:newest-cni-821472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:12.374434  302677 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-821472"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:35:12.374535  302677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:35:12.383578  302677 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:12.383637  302677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:12.391434  302677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:35:12.404377  302677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:35:12.423679  302677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
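The rendered kubeadm config above (2218 bytes) is copied to the node as kubeadm.yaml.new before init. As a hedged aside, it can be sanity-checked without mutating node state by dry-running the same kubeadm binary the log uses against it:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run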
	I1212 00:35:12.436660  302677 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:12.440733  302677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:12.451195  302677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:12.537285  302677 ssh_runner.go:195] Run: sudo systemctl start kubelet
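After the daemon-reload and start above, the kubelet runs with the drop-in written a few lines earlier; it typically restarts in a loop until kubeadm init produces /etc/kubernetes/kubelet.conf, which is expected at this stage. A quick way to confirm which unit files systemd actually loaded:

    systemctl cat kubelet        # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet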
	I1212 00:35:12.559799  302677 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472 for IP: 192.168.76.2
	I1212 00:35:12.559818  302677 certs.go:195] generating shared ca certs ...
	I1212 00:35:12.559838  302677 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.559992  302677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:12.560043  302677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:12.560055  302677 certs.go:257] generating profile certs ...
	I1212 00:35:12.560117  302677 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key
	I1212 00:35:12.560142  302677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.crt with IP's: []
	I1212 00:35:12.605767  302677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.crt ...
	I1212 00:35:12.605794  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.crt: {Name:mk62a438d5b5213a1e604f2aad5a254998a9c462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.605974  302677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key ...
	I1212 00:35:12.605988  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key: {Name:mk63a3fc23e864057dcaa9c8effbe724759615bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.606086  302677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0
	I1212 00:35:12.606104  302677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 00:35:12.656429  302677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0 ...
	I1212 00:35:12.656451  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0: {Name:mkcaca46142fb6d4be74e9883db090ebf7e5cf3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.656597  302677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0 ...
	I1212 00:35:12.656613  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0: {Name:mkf4411e99bacc1b752d816d27a1043dcd50d436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.656687  302677 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt.e08375e0 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt
	I1212 00:35:12.656757  302677 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key
	I1212 00:35:12.656810  302677 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key
	I1212 00:35:12.656825  302677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt with IP's: []
	I1212 00:35:12.794384  302677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt ...
	I1212 00:35:12.794408  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt: {Name:mk0075073046a87e6d2960d42dc638b72f046c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.794577  302677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key ...
	I1212 00:35:12.794595  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key: {Name:mk204e192392804e3969ede81a4f299490ab4215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:12.794762  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:12.794798  302677 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:12.794808  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:12.794839  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:12.794875  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:12.794899  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:12.794953  302677 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:12.795698  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:12.813014  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:12.829220  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:12.845778  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:12.861807  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:35:12.878084  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:12.895917  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:12.912533  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:35:12.928436  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:12.946171  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:12.962496  302677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:12.978793  302677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
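The profile certificates copied above include the apiserver serving cert generated earlier for SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2. A hedged one-liner to confirm them on the node:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | \
        grep -A1 'Subject Alternative Name'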
	I1212 00:35:12.990605  302677 ssh_runner.go:195] Run: openssl version
	I1212 00:35:12.996246  302677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.003391  302677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:13.010336  302677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.013787  302677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.013843  302677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:13.047425  302677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:13.054597  302677 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:35:13.061519  302677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.068422  302677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:13.075772  302677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.079428  302677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.079485  302677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:13.113308  302677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:13.120292  302677 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:13.127088  302677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.133810  302677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:13.140741  302677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.144153  302677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.144193  302677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:13.177492  302677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:13.184429  302677 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:35:13.191968  302677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:13.196429  302677 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:35:13.196488  302677 kubeadm.go:401] StartCluster: {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:13.196559  302677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:13.196600  302677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:13.228866  302677 cri.go:89] found id: ""
	I1212 00:35:13.228944  302677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:13.238592  302677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:35:13.246897  302677 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:35:13.247000  302677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:35:13.254688  302677 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:35:13.254703  302677 kubeadm.go:158] found existing configuration files:
	
	I1212 00:35:13.254736  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:35:13.263097  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:35:13.263270  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:35:13.273080  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:35:13.282211  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:35:13.282268  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:35:13.291221  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:35:13.302552  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:35:13.302604  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:35:13.313834  302677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:35:13.325420  302677 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:35:13.325564  302677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:35:13.336740  302677 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:35:13.383615  302677 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:35:13.383902  302677 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:35:13.451434  302677 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:35:13.451591  302677 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:35:13.451650  302677 kubeadm.go:319] OS: Linux
	I1212 00:35:13.451744  302677 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:35:13.451811  302677 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:35:13.451890  302677 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:35:13.451953  302677 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:35:13.452042  302677 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:35:13.452118  302677 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:35:13.452187  302677 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:35:13.452283  302677 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:35:13.513534  302677 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:35:13.513688  302677 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:35:13.513825  302677 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:35:13.521572  302677 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:35:13.524580  302677 out.go:252]   - Generating certificates and keys ...
	I1212 00:35:13.524672  302677 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:35:13.524815  302677 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:35:13.627518  302677 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:35:13.704108  302677 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:35:13.773103  302677 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:35:13.794355  302677 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:35:13.913209  302677 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:35:13.913398  302677 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-821472] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:35:13.947286  302677 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:35:13.947435  302677 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-821472] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 00:35:14.159651  302677 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:35:14.216650  302677 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:35:14.472885  302677 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:35:14.473061  302677 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:35:14.633454  302677 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:35:14.753641  302677 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:35:14.814612  302677 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:35:14.895913  302677 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:35:14.914803  302677 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:35:14.915315  302677 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:35:14.919969  302677 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:35:12.133773  300250 out.go:252]   - Booting up control plane ...
	I1212 00:35:12.133887  300250 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:35:12.134001  300250 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:35:12.135036  300250 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:35:12.151772  300250 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:35:12.151914  300250 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:35:12.159834  300250 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:35:12.160143  300250 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:35:12.160211  300250 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:35:12.271327  300250 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:35:12.271562  300250 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:35:12.773136  300250 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.727313ms
	I1212 00:35:12.778862  300250 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:35:12.778987  300250 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1212 00:35:12.779121  300250 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:35:12.779237  300250 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:35:14.491599  300250 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.712585121s
	I1212 00:35:15.317576  300250 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.53833817s
	I1212 00:35:14.921519  302677 out.go:252]   - Booting up control plane ...
	I1212 00:35:14.921651  302677 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:35:14.924749  302677 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:35:14.924842  302677 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:35:14.943271  302677 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:35:14.943423  302677 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:35:14.949795  302677 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:35:14.950158  302677 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:35:14.950225  302677 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:35:15.059243  302677 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:35:15.059468  302677 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:35:15.560643  302677 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.54438ms
	I1212 00:35:15.564093  302677 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:35:15.564228  302677 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1212 00:35:15.564345  302677 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:35:15.564457  302677 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:35:16.569708  302677 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005463869s
	I1212 00:35:12.100744  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:12.101121  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:12.101177  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:12.101233  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:12.130434  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:12.130457  263844 cri.go:89] found id: ""
	I1212 00:35:12.130467  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:12.130537  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.134919  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:12.134991  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:12.165688  263844 cri.go:89] found id: ""
	I1212 00:35:12.165712  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.165723  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:12.165735  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:12.165800  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:12.200067  263844 cri.go:89] found id: ""
	I1212 00:35:12.200095  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.200105  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:12.200114  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:12.200175  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:12.229100  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:12.229122  263844 cri.go:89] found id: ""
	I1212 00:35:12.229132  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:12.229188  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.233550  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:12.233611  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:12.263168  263844 cri.go:89] found id: ""
	I1212 00:35:12.263191  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.263197  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:12.263203  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:12.263249  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:12.290993  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:12.291011  263844 cri.go:89] found id: ""
	I1212 00:35:12.291020  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:12.291082  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:12.294886  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:12.294950  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:12.324024  263844 cri.go:89] found id: ""
	I1212 00:35:12.324047  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.324056  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:12.324064  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:12.324126  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:12.351428  263844 cri.go:89] found id: ""
	I1212 00:35:12.351452  263844 logs.go:282] 0 containers: []
	W1212 00:35:12.351462  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:12.351498  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:12.351516  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:12.385548  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:12.385577  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:12.416157  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:12.416182  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:12.442624  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:12.442646  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:12.502981  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:12.503012  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:12.532856  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:12.532883  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:12.653112  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:12.653143  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:12.668189  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:12.668212  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:12.724901  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:15.225542  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:15.225965  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:15.226024  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:15.226088  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:15.254653  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:15.254676  263844 cri.go:89] found id: ""
	I1212 00:35:15.254685  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:15.254771  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:15.260402  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:15.260491  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:15.305137  263844 cri.go:89] found id: ""
	I1212 00:35:15.305160  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.305171  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:15.305178  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:15.305245  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:15.333505  263844 cri.go:89] found id: ""
	I1212 00:35:15.333538  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.333552  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:15.333561  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:15.333614  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:15.361356  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:15.361380  263844 cri.go:89] found id: ""
	I1212 00:35:15.361389  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:15.361453  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:15.365357  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:15.365419  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:15.396683  263844 cri.go:89] found id: ""
	I1212 00:35:15.396704  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.396711  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:15.396717  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:15.396773  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:15.429111  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:15.429152  263844 cri.go:89] found id: ""
	I1212 00:35:15.429163  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:15.429219  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:15.433130  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:15.433183  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:15.458694  263844 cri.go:89] found id: ""
	I1212 00:35:15.458722  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.458732  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:15.458740  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:15.458801  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:15.502451  263844 cri.go:89] found id: ""
	I1212 00:35:15.502498  263844 logs.go:282] 0 containers: []
	W1212 00:35:15.502535  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:15.502550  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:15.502570  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:15.533938  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:15.533965  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:15.561307  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:15.561331  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:15.613786  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:15.613818  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:15.643661  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:15.643695  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:15.720562  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:15.720586  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:15.734392  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:15.734426  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:15.787130  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:15.787150  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:15.787162  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:17.280767  300250 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501711696s
	I1212 00:35:17.298890  300250 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:35:17.306798  300250 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:35:17.316204  300250 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:35:17.316535  300250 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-079970 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:35:17.324332  300250 kubeadm.go:319] [bootstrap-token] Using token: vzntbo.0n3hslrivx4nbk6h
	I1212 00:35:17.325581  300250 out.go:252]   - Configuring RBAC rules ...
	I1212 00:35:17.325727  300250 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:35:17.328649  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:35:17.333384  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:35:17.335765  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:35:17.338603  300250 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:35:17.340971  300250 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:35:17.686781  300250 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:35:18.102084  300250 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:35:18.687720  300250 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:35:18.689043  300250 kubeadm.go:319] 
	I1212 00:35:18.689133  300250 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:35:18.689152  300250 kubeadm.go:319] 
	I1212 00:35:18.689249  300250 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:35:18.689270  300250 kubeadm.go:319] 
	I1212 00:35:18.689313  300250 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:35:18.689387  300250 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:35:18.689456  300250 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:35:18.689466  300250 kubeadm.go:319] 
	I1212 00:35:18.689568  300250 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:35:18.689580  300250 kubeadm.go:319] 
	I1212 00:35:18.689626  300250 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:35:18.689634  300250 kubeadm.go:319] 
	I1212 00:35:18.689709  300250 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:35:18.689831  300250 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:35:18.689940  300250 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:35:18.689950  300250 kubeadm.go:319] 
	I1212 00:35:18.690061  300250 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:35:18.690186  300250 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:35:18.690195  300250 kubeadm.go:319] 
	I1212 00:35:18.690314  300250 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token vzntbo.0n3hslrivx4nbk6h \
	I1212 00:35:18.690496  300250 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:35:18.690545  300250 kubeadm.go:319] 	--control-plane 
	I1212 00:35:18.690563  300250 kubeadm.go:319] 
	I1212 00:35:18.690686  300250 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:35:18.690696  300250 kubeadm.go:319] 
	I1212 00:35:18.690812  300250 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token vzntbo.0n3hslrivx4nbk6h \
	I1212 00:35:18.690954  300250 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1212 00:35:18.694339  300250 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:35:18.694550  300250 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:35:18.694567  300250 cni.go:84] Creating CNI manager for ""
	I1212 00:35:18.694576  300250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:18.696703  300250 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 00:35:17.160850  302677 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.596679207s
	I1212 00:35:19.066502  302677 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502324833s
	I1212 00:35:19.085057  302677 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:35:19.096897  302677 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:35:19.107177  302677 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:35:19.107466  302677 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-821472 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:35:19.115974  302677 kubeadm.go:319] [bootstrap-token] Using token: 1hw5s9.frvasufed6x8ofpi
	I1212 00:35:19.117834  302677 out.go:252]   - Configuring RBAC rules ...
	I1212 00:35:19.117975  302677 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:35:19.120835  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:35:19.125865  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:35:19.127972  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:35:19.130202  302677 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:35:19.132498  302677 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:35:19.472755  302677 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:35:19.889183  302677 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:35:20.472283  302677 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:35:20.473228  302677 kubeadm.go:319] 
	I1212 00:35:20.473347  302677 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:35:20.473382  302677 kubeadm.go:319] 
	I1212 00:35:20.473514  302677 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:35:20.473529  302677 kubeadm.go:319] 
	I1212 00:35:20.473564  302677 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:35:20.473653  302677 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:35:20.473726  302677 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:35:20.473742  302677 kubeadm.go:319] 
	I1212 00:35:20.473830  302677 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:35:20.473840  302677 kubeadm.go:319] 
	I1212 00:35:20.473908  302677 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:35:20.473917  302677 kubeadm.go:319] 
	I1212 00:35:20.473990  302677 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:35:20.474112  302677 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:35:20.474211  302677 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:35:20.474220  302677 kubeadm.go:319] 
	I1212 00:35:20.474319  302677 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:35:20.474421  302677 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:35:20.474431  302677 kubeadm.go:319] 
	I1212 00:35:20.474566  302677 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1hw5s9.frvasufed6x8ofpi \
	I1212 00:35:20.474716  302677 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:35:20.474777  302677 kubeadm.go:319] 	--control-plane 
	I1212 00:35:20.474785  302677 kubeadm.go:319] 
	I1212 00:35:20.474895  302677 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:35:20.474908  302677 kubeadm.go:319] 
	I1212 00:35:20.475015  302677 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1hw5s9.frvasufed6x8ofpi \
	I1212 00:35:20.475137  302677 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1212 00:35:20.477381  302677 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:35:20.477537  302677 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:35:20.477568  302677 cni.go:84] Creating CNI manager for ""
	I1212 00:35:20.477580  302677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:20.479065  302677 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 00:35:18.698118  300250 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:35:18.702449  300250 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 00:35:18.702465  300250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:35:18.716877  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:35:18.969312  300250 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:35:18.969392  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:18.969443  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-079970 minikube.k8s.io/updated_at=2025_12_12T00_35_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=default-k8s-diff-port-079970 minikube.k8s.io/primary=true
	I1212 00:35:18.980253  300250 ops.go:34] apiserver oom_adj: -16
	I1212 00:35:19.044290  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:19.544400  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.044663  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.544656  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.480077  302677 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:35:20.484251  302677 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1212 00:35:20.484270  302677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:35:20.498227  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:35:20.722785  302677 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:35:20.722859  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.722863  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-821472 minikube.k8s.io/updated_at=2025_12_12T00_35_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=newest-cni-821472 minikube.k8s.io/primary=true
	I1212 00:35:20.792681  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:20.792694  302677 ops.go:34] apiserver oom_adj: -16
	I1212 00:35:21.293770  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:18.318161  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:18.318666  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:18.318729  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:18.318786  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:18.352544  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:18.352567  263844 cri.go:89] found id: ""
	I1212 00:35:18.352577  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:18.352636  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:18.357312  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:18.357378  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:18.386881  263844 cri.go:89] found id: ""
	I1212 00:35:18.386902  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.386912  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:18.386919  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:18.386973  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:18.417064  263844 cri.go:89] found id: ""
	I1212 00:35:18.417088  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.417099  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:18.417107  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:18.417167  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:18.447543  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:18.447568  263844 cri.go:89] found id: ""
	I1212 00:35:18.447578  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:18.447647  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:18.452360  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:18.452420  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:18.481882  263844 cri.go:89] found id: ""
	I1212 00:35:18.481911  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.481923  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:18.481931  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:18.481984  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:18.522675  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:18.522698  263844 cri.go:89] found id: ""
	I1212 00:35:18.522707  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:18.522770  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:18.527549  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:18.527622  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:18.557639  263844 cri.go:89] found id: ""
	I1212 00:35:18.557663  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.557673  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:18.557680  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:18.557741  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:18.587529  263844 cri.go:89] found id: ""
	I1212 00:35:18.587558  263844 logs.go:282] 0 containers: []
	W1212 00:35:18.587568  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:18.587582  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:18.587600  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:18.617911  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:18.617944  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:18.677434  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:18.677462  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:18.710192  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:18.710220  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:18.813216  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:18.813247  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:18.829843  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:18.829876  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:18.895288  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:18.895311  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:18.895326  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:18.928602  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:18.928632  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:21.468717  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:21.469075  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:21.469120  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:21.469168  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:21.494583  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:21.494599  263844 cri.go:89] found id: ""
	I1212 00:35:21.494607  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:21.494665  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:21.498756  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:21.498842  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:21.524669  263844 cri.go:89] found id: ""
	I1212 00:35:21.524694  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.524704  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:21.524710  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:21.524752  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:21.550501  263844 cri.go:89] found id: ""
	I1212 00:35:21.550525  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.550537  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:21.550544  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:21.550599  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:21.576757  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:21.576778  263844 cri.go:89] found id: ""
	I1212 00:35:21.576787  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:21.576840  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:21.580808  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:21.580869  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:21.610338  263844 cri.go:89] found id: ""
	I1212 00:35:21.610365  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.610380  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:21.610387  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:21.610458  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:21.636138  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:21.636157  263844 cri.go:89] found id: ""
	I1212 00:35:21.636164  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:21.636218  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:21.640049  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:21.640100  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:21.665873  263844 cri.go:89] found id: ""
	I1212 00:35:21.665897  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.665905  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:21.665913  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:21.665980  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:21.692021  263844 cri.go:89] found id: ""
	I1212 00:35:21.692046  263844 logs.go:282] 0 containers: []
	W1212 00:35:21.692057  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:21.692068  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:21.692082  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:21.783812  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:21.783843  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:21.799265  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:21.799297  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:21.858012  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:21.858034  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:21.858055  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:21.888680  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:21.888707  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:21.914847  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:21.914873  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:21.939973  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:21.939996  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:21.993411  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:21.993438  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:21.044408  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:21.545165  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:22.045161  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:22.544651  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:23.044599  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:23.544880  300250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:23.613988  300250 kubeadm.go:1114] duration metric: took 4.644651054s to wait for elevateKubeSystemPrivileges
	I1212 00:35:23.614027  300250 kubeadm.go:403] duration metric: took 14.971500685s to StartCluster
	I1212 00:35:23.614050  300250 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:23.614149  300250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:23.616371  300250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:23.616639  300250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:35:23.616643  300250 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:35:23.616699  300250 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:35:23.616817  300250 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-079970"
	I1212 00:35:23.616827  300250 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-079970"
	I1212 00:35:23.616837  300250 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-079970"
	I1212 00:35:23.616844  300250 config.go:182] Loaded profile config "default-k8s-diff-port-079970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:23.616853  300250 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-079970"
	I1212 00:35:23.616871  300250 host.go:66] Checking if "default-k8s-diff-port-079970" exists ...
	I1212 00:35:23.617208  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:23.617376  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:23.620577  300250 out.go:179] * Verifying Kubernetes components...
	I1212 00:35:23.621791  300250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:23.640258  300250 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:35:23.641348  300250 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:23.641367  300250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:35:23.641447  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:23.642838  300250 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-079970"
	I1212 00:35:23.642881  300250 host.go:66] Checking if "default-k8s-diff-port-079970" exists ...
	I1212 00:35:23.643360  300250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:35:23.666418  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:23.675582  300250 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:23.675604  300250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:35:23.675684  300250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:35:23.703381  300250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:35:23.722052  300250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:35:23.796379  300250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:23.797433  300250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:23.826413  300250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:23.939621  300250 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1212 00:35:24.216756  300250 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-079970" to be "Ready" ...
	I1212 00:35:24.221321  300250 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:35:21.793360  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:22.293579  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:22.793237  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:23.292820  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:23.793548  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:24.292785  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:24.793695  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:25.293358  302677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:25.372217  302677 kubeadm.go:1114] duration metric: took 4.649413584s to wait for elevateKubeSystemPrivileges
	I1212 00:35:25.372259  302677 kubeadm.go:403] duration metric: took 12.175774379s to StartCluster
	I1212 00:35:25.372281  302677 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:25.372350  302677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:25.374667  302677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:25.374915  302677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:35:25.374933  302677 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:35:25.374987  302677 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:35:25.375092  302677 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-821472"
	I1212 00:35:25.375122  302677 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-821472"
	I1212 00:35:25.375149  302677 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:25.375156  302677 addons.go:70] Setting default-storageclass=true in profile "newest-cni-821472"
	I1212 00:35:25.375158  302677 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:25.375204  302677 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-821472"
	I1212 00:35:25.375650  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:25.375831  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:25.379629  302677 out.go:179] * Verifying Kubernetes components...
	I1212 00:35:25.380984  302677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:25.403756  302677 addons.go:239] Setting addon default-storageclass=true in "newest-cni-821472"
	I1212 00:35:25.403814  302677 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:25.404809  302677 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:35:24.222304  300250 addons.go:530] duration metric: took 605.606576ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 00:35:24.444542  300250 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-079970" context rescaled to 1 replicas
	I1212 00:35:25.405223  302677 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:25.406101  302677 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:25.406121  302677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:35:25.406185  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:25.433904  302677 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:25.433925  302677 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:35:25.433989  302677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:25.443785  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:25.458192  302677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:25.482550  302677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:35:25.555945  302677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:25.570367  302677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:25.577516  302677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:25.704623  302677 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1212 00:35:25.706409  302677 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:35:25.706470  302677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:35:25.920353  302677 api_server.go:72] duration metric: took 545.38908ms to wait for apiserver process to appear ...
	I1212 00:35:25.920427  302677 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:35:25.920454  302677 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:25.925800  302677 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 00:35:25.926499  302677 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 00:35:25.926521  302677 api_server.go:131] duration metric: took 6.083042ms to wait for apiserver health ...
	I1212 00:35:25.926533  302677 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:35:25.929054  302677 system_pods.go:59] 8 kube-system pods found
	I1212 00:35:25.929084  302677 system_pods.go:61] "coredns-7d764666f9-jh7k7" [47b3a0d4-8cf1-493d-8476-854bf16da9c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 00:35:25.929091  302677 system_pods.go:61] "etcd-newest-cni-821472" [873a9831-a5b5-4c30-ab0d-03b2d4f01bc9] Running
	I1212 00:35:25.929095  302677 system_pods.go:61] "kindnet-j79t9" [d76b2dd5-9a77-4340-8bbf-9c37dbb875ed] Running
	I1212 00:35:25.929099  302677 system_pods.go:61] "kube-apiserver-newest-cni-821472" [f133af68-91ae-4346-a167-9b8a88347f18] Running
	I1212 00:35:25.929103  302677 system_pods.go:61] "kube-controller-manager-newest-cni-821472" [549c410e-aef5-4f29-b928-488385df0998] Running
	I1212 00:35:25.929107  302677 system_pods.go:61] "kube-proxy-9kt8x" [5f73abae-7ab2-4110-a5e8-3623cf25bab2] Running
	I1212 00:35:25.929114  302677 system_pods.go:61] "kube-scheduler-newest-cni-821472" [4daba7f7-0db4-44d6-b143-0d9dba4b5048] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:35:25.929123  302677 system_pods.go:61] "storage-provisioner" [cd0e3704-d2bd-42bc-b3fb-5da6006b6e6d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 00:35:25.929129  302677 system_pods.go:74] duration metric: took 2.586915ms to wait for pod list to return data ...
	I1212 00:35:25.929137  302677 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:35:25.929177  302677 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:35:25.930507  302677 addons.go:530] duration metric: took 555.521815ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 00:35:25.931620  302677 default_sa.go:45] found service account: "default"
	I1212 00:35:25.931638  302677 default_sa.go:55] duration metric: took 2.49644ms for default service account to be created ...
	I1212 00:35:25.931648  302677 kubeadm.go:587] duration metric: took 556.688289ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 00:35:25.931661  302677 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:35:25.933764  302677 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:35:25.933784  302677 node_conditions.go:123] node cpu capacity is 8
	I1212 00:35:25.933796  302677 node_conditions.go:105] duration metric: took 2.130971ms to run NodePressure ...
	I1212 00:35:25.933806  302677 start.go:242] waiting for startup goroutines ...
	I1212 00:35:26.210157  302677 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-821472" context rescaled to 1 replicas
	I1212 00:35:26.210207  302677 start.go:247] waiting for cluster config update ...
	I1212 00:35:26.210230  302677 start.go:256] writing updated cluster config ...
	I1212 00:35:26.210630  302677 ssh_runner.go:195] Run: rm -f paused
	I1212 00:35:26.272895  302677 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 00:35:26.275044  302677 out.go:179] * Done! kubectl is now configured to use "newest-cni-821472" cluster and "default" namespace by default
	I1212 00:35:24.524813  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:24.525262  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 00:35:24.525316  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:24.525378  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:24.555364  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:24.555389  263844 cri.go:89] found id: ""
	I1212 00:35:24.555399  263844 logs.go:282] 1 containers: [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:24.555458  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:24.559674  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:24.559740  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:24.588638  263844 cri.go:89] found id: ""
	I1212 00:35:24.588660  263844 logs.go:282] 0 containers: []
	W1212 00:35:24.588669  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:24.588676  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:24.588728  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:24.619265  263844 cri.go:89] found id: ""
	I1212 00:35:24.619293  263844 logs.go:282] 0 containers: []
	W1212 00:35:24.619302  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:24.619308  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:24.619354  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:24.645774  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:24.645793  263844 cri.go:89] found id: ""
	I1212 00:35:24.645800  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:24.645860  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:24.650021  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:24.650085  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:24.676810  263844 cri.go:89] found id: ""
	I1212 00:35:24.676835  263844 logs.go:282] 0 containers: []
	W1212 00:35:24.676844  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:24.676851  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:24.676922  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:24.704202  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:24.704225  263844 cri.go:89] found id: ""
	I1212 00:35:24.704239  263844 logs.go:282] 1 containers: [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:24.704295  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:24.708177  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:24.708238  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:24.739195  263844 cri.go:89] found id: ""
	I1212 00:35:24.739218  263844 logs.go:282] 0 containers: []
	W1212 00:35:24.739227  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:24.739235  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:24.739292  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:24.775920  263844 cri.go:89] found id: ""
	I1212 00:35:24.775945  263844 logs.go:282] 0 containers: []
	W1212 00:35:24.775955  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:24.775966  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:24.775978  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:24.881124  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:24.881152  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:24.895934  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:24.895957  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:24.965076  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:24.965097  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:24.965112  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:24.999037  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:24.999074  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:25.033074  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:25.033111  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:25.066403  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:25.066435  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:25.131663  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:25.131695  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.477232183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.481789384Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d6766367-6155-4f16-a01e-9f0b218cfc7b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.482710226Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5fe36208-4ca2-4d93-b0b9-175f52ed6abe name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.483433663Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.484349308Z" level=info msg="Ran pod sandbox fa800cb1bdafc09d982fb5b12e17454a8c1f83b525d782e9cb2f8450f7591983 with infra container: kube-system/kube-proxy-9kt8x/POD" id=d6766367-6155-4f16-a01e-9f0b218cfc7b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.484816327Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.485605417Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b502de6b-2713-4812-84a8-987358141225 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.487295196Z" level=info msg="Ran pod sandbox 313a38e3a9fc5e65dd4b7323063c59cc1c4888cf33ac49c137c9d2011a480891 with infra container: kube-system/kindnet-j79t9/POD" id=5fe36208-4ca2-4d93-b0b9-175f52ed6abe name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.489033177Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d6cde882-1271-4adf-b707-77988a85e052 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.48925384Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b3f0b9cb-216c-4edd-a9cd-f84f86ca7494 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.492361178Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=24af9636-4a83-404a-80b8-02c4b1f643fd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.497222697Z" level=info msg="Creating container: kube-system/kube-proxy-9kt8x/kube-proxy" id=fd40c23d-ab76-44a2-a2e3-83b6e6cfdf8e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.497356742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.497710552Z" level=info msg="Creating container: kube-system/kindnet-j79t9/kindnet-cni" id=b3bae042-d0ab-409c-88c6-64f1e1304201 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.497788777Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.504657775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.505288433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.505533809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.506039583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.602865696Z" level=info msg="Created container 8b049a19204fe6930eb7c71e6fd1a809c84b5acea150e034f0b67df1f54e0202: kube-system/kindnet-j79t9/kindnet-cni" id=b3bae042-d0ab-409c-88c6-64f1e1304201 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.604113297Z" level=info msg="Starting container: 8b049a19204fe6930eb7c71e6fd1a809c84b5acea150e034f0b67df1f54e0202" id=00be7d76-cd74-402a-858b-7bea2f715cc2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.607065185Z" level=info msg="Started container" PID=1585 containerID=8b049a19204fe6930eb7c71e6fd1a809c84b5acea150e034f0b67df1f54e0202 description=kube-system/kindnet-j79t9/kindnet-cni id=00be7d76-cd74-402a-858b-7bea2f715cc2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=313a38e3a9fc5e65dd4b7323063c59cc1c4888cf33ac49c137c9d2011a480891
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.609488265Z" level=info msg="Created container 9a62dfac112f4d1dc9440f5d1ae2b326f8aaad1724c5027eeac7ec1e056f69ad: kube-system/kube-proxy-9kt8x/kube-proxy" id=fd40c23d-ab76-44a2-a2e3-83b6e6cfdf8e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.610427459Z" level=info msg="Starting container: 9a62dfac112f4d1dc9440f5d1ae2b326f8aaad1724c5027eeac7ec1e056f69ad" id=8e98d913-2075-4420-8d7b-b3770112c61d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:35:25 newest-cni-821472 crio[779]: time="2025-12-12T00:35:25.614414922Z" level=info msg="Started container" PID=1588 containerID=9a62dfac112f4d1dc9440f5d1ae2b326f8aaad1724c5027eeac7ec1e056f69ad description=kube-system/kube-proxy-9kt8x/kube-proxy id=8e98d913-2075-4420-8d7b-b3770112c61d name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa800cb1bdafc09d982fb5b12e17454a8c1f83b525d782e9cb2f8450f7591983
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8b049a19204fe       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   313a38e3a9fc5       kindnet-j79t9                               kube-system
	9a62dfac112f4       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   2 seconds ago       Running             kube-proxy                0                   fa800cb1bdafc       kube-proxy-9kt8x                            kube-system
	022307fa30d9b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   11 seconds ago      Running             etcd                      0                   c87aeaef3d056       etcd-newest-cni-821472                      kube-system
	b35172584e1bc       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   11 seconds ago      Running             kube-controller-manager   0                   1fa79e9b78a5a       kube-controller-manager-newest-cni-821472   kube-system
	9eadd3a869210       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   11 seconds ago      Running             kube-apiserver            0                   499e44fe84f25       kube-apiserver-newest-cni-821472            kube-system
	8221eb3641eff       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   11 seconds ago      Running             kube-scheduler            0                   3065da10a416f       kube-scheduler-newest-cni-821472            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-821472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-821472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=newest-cni-821472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_35_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:35:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-821472
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:35:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:35:19 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:35:19 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:35:19 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 12 Dec 2025 00:35:19 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-821472
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                aee33282-724a-47cd-8807-62e94d0c0413
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-821472                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-j79t9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-821472             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-821472    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-9kt8x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-821472             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-821472 event: Registered Node newest-cni-821472 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [022307fa30d9bc933386e00cc7afd380ad4833daac1f4abb328b488415722f4e] <==
	{"level":"warn","ts":"2025-12-12T00:35:16.477788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.485245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.495827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.503188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.510607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.518059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.524977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.532895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.539584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.547106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.560606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.568263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.575540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.583579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.590954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.598875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.606328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.613790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.621781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.631857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.639469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.659910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.671333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.678568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:16.728025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54986","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:35:27 up  1:17,  0 user,  load average: 5.06, 3.39, 2.08
	Linux newest-cni-821472 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8b049a19204fe6930eb7c71e6fd1a809c84b5acea150e034f0b67df1f54e0202] <==
	I1212 00:35:25.909227       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:35:25.909504       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 00:35:25.909676       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:35:25.909699       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:35:25.909726       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:35:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:35:26.109417       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:35:26.109447       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:35:26.109459       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:35:26.206013       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:35:26.605740       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:35:26.605835       1 metrics.go:72] Registering metrics
	I1212 00:35:26.605949       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [9eadd3a8692109ab5afea4deb7a324ebf1b08a0d5d32e61b0f53e0f3003883a2] <==
	I1212 00:35:17.213893       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:35:17.216127       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:35:17.217762       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1212 00:35:17.219240       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 00:35:17.219275       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 00:35:17.219288       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 00:35:17.222377       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:35:17.401741       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:35:18.113862       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1212 00:35:18.117500       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1212 00:35:18.117515       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 00:35:18.566676       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:35:18.603315       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:35:18.718527       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 00:35:18.724417       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1212 00:35:18.725705       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:35:18.731111       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:35:19.150755       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:35:19.879943       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:35:19.888380       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 00:35:19.895767       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 00:35:24.705731       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:35:24.860372       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:35:24.865730       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:35:25.153357       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b35172584e1bcae5ff4effb9cc303de9287fd5ee013a73f7d265e4fa52873b2f] <==
	I1212 00:35:23.976604       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.976639       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.977020       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.979318       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.979401       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.978578       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.978594       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.978734       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.978793       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.979451       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.977543       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:35:23.979659       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.979673       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.979678       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.978557       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.985028       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.985208       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.985266       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.987681       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.989903       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:23.998319       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-821472" podCIDRs=["10.42.0.0/24"]
	I1212 00:35:24.079515       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:24.079540       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 00:35:24.079566       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 00:35:24.082185       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [9a62dfac112f4d1dc9440f5d1ae2b326f8aaad1724c5027eeac7ec1e056f69ad] <==
	I1212 00:35:25.665645       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:35:25.730604       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:35:25.831580       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:25.831615       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 00:35:25.831719       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:35:25.851365       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:35:25.851464       1 server_linux.go:136] "Using iptables Proxier"
	I1212 00:35:25.857130       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:35:25.857576       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 00:35:25.857623       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:35:25.860225       1 config.go:200] "Starting service config controller"
	I1212 00:35:25.860289       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:35:25.860356       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:35:25.860363       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:35:25.860379       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:35:25.860384       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:35:25.861108       1 config.go:309] "Starting node config controller"
	I1212 00:35:25.861160       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:35:25.861187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:35:25.961090       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:35:25.961128       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:35:25.961131       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8221eb3641eff8b2ee3da2fa27e1ecd27116c845e85de1d1281cf09edac78fbb] <==
	E1212 00:35:18.001240       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1212 00:35:18.002450       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1212 00:35:18.091229       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1212 00:35:18.092307       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1212 00:35:18.127385       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1212 00:35:18.128351       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1212 00:35:18.157799       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 00:35:18.158810       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1212 00:35:18.173800       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1212 00:35:18.174580       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1212 00:35:18.202494       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:35:18.203462       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1212 00:35:18.252558       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1212 00:35:18.253543       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1212 00:35:18.264845       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 00:35:18.265777       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1212 00:35:18.316606       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:35:18.317592       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1212 00:35:18.349086       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:35:18.349087       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:35:18.350117       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1212 00:35:18.350290       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1212 00:35:18.369890       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1212 00:35:18.370965       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1212 00:35:20.656681       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 00:35:20 newest-cni-821472 kubelet[1309]: E1212 00:35:20.740699    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-821472" containerName="kube-controller-manager"
	Dec 12 00:35:21 newest-cni-821472 kubelet[1309]: E1212 00:35:21.732870    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-821472" containerName="kube-scheduler"
	Dec 12 00:35:21 newest-cni-821472 kubelet[1309]: E1212 00:35:21.732975    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-821472" containerName="kube-controller-manager"
	Dec 12 00:35:21 newest-cni-821472 kubelet[1309]: E1212 00:35:21.733101    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-821472" containerName="etcd"
	Dec 12 00:35:21 newest-cni-821472 kubelet[1309]: E1212 00:35:21.733216    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-821472" containerName="kube-apiserver"
	Dec 12 00:35:21 newest-cni-821472 kubelet[1309]: I1212 00:35:21.764155    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-821472" podStartSLOduration=2.764128334 podStartE2EDuration="2.764128334s" podCreationTimestamp="2025-12-12 00:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:21.755631607 +0000 UTC m=+2.121278655" watchObservedRunningTime="2025-12-12 00:35:21.764128334 +0000 UTC m=+2.129775379"
	Dec 12 00:35:21 newest-cni-821472 kubelet[1309]: I1212 00:35:21.764297    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-821472" podStartSLOduration=2.764292153 podStartE2EDuration="2.764292153s" podCreationTimestamp="2025-12-12 00:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:21.764245439 +0000 UTC m=+2.129892481" watchObservedRunningTime="2025-12-12 00:35:21.764292153 +0000 UTC m=+2.129939200"
	Dec 12 00:35:21 newest-cni-821472 kubelet[1309]: I1212 00:35:21.780044    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-821472" podStartSLOduration=2.780030628 podStartE2EDuration="2.780030628s" podCreationTimestamp="2025-12-12 00:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:21.771745763 +0000 UTC m=+2.137392806" watchObservedRunningTime="2025-12-12 00:35:21.780030628 +0000 UTC m=+2.145677677"
	Dec 12 00:35:21 newest-cni-821472 kubelet[1309]: I1212 00:35:21.780139    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-821472" podStartSLOduration=3.780135527 podStartE2EDuration="3.780135527s" podCreationTimestamp="2025-12-12 00:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:21.78001928 +0000 UTC m=+2.145666328" watchObservedRunningTime="2025-12-12 00:35:21.780135527 +0000 UTC m=+2.145782575"
	Dec 12 00:35:22 newest-cni-821472 kubelet[1309]: E1212 00:35:22.734752    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-821472" containerName="kube-scheduler"
	Dec 12 00:35:24 newest-cni-821472 kubelet[1309]: I1212 00:35:24.030168    1309 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 12 00:35:24 newest-cni-821472 kubelet[1309]: I1212 00:35:24.031152    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 12 00:35:24 newest-cni-821472 kubelet[1309]: E1212 00:35:24.753456    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-821472" containerName="kube-scheduler"
	Dec 12 00:35:25 newest-cni-821472 kubelet[1309]: I1212 00:35:25.239201    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbtbn\" (UniqueName: \"kubernetes.io/projected/d76b2dd5-9a77-4340-8bbf-9c37dbb875ed-kube-api-access-pbtbn\") pod \"kindnet-j79t9\" (UID: \"d76b2dd5-9a77-4340-8bbf-9c37dbb875ed\") " pod="kube-system/kindnet-j79t9"
	Dec 12 00:35:25 newest-cni-821472 kubelet[1309]: I1212 00:35:25.239251    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25mqp\" (UniqueName: \"kubernetes.io/projected/5f73abae-7ab2-4110-a5e8-3623cf25bab2-kube-api-access-25mqp\") pod \"kube-proxy-9kt8x\" (UID: \"5f73abae-7ab2-4110-a5e8-3623cf25bab2\") " pod="kube-system/kube-proxy-9kt8x"
	Dec 12 00:35:25 newest-cni-821472 kubelet[1309]: I1212 00:35:25.239283    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f73abae-7ab2-4110-a5e8-3623cf25bab2-kube-proxy\") pod \"kube-proxy-9kt8x\" (UID: \"5f73abae-7ab2-4110-a5e8-3623cf25bab2\") " pod="kube-system/kube-proxy-9kt8x"
	Dec 12 00:35:25 newest-cni-821472 kubelet[1309]: I1212 00:35:25.239303    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f73abae-7ab2-4110-a5e8-3623cf25bab2-lib-modules\") pod \"kube-proxy-9kt8x\" (UID: \"5f73abae-7ab2-4110-a5e8-3623cf25bab2\") " pod="kube-system/kube-proxy-9kt8x"
	Dec 12 00:35:25 newest-cni-821472 kubelet[1309]: I1212 00:35:25.239331    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d76b2dd5-9a77-4340-8bbf-9c37dbb875ed-cni-cfg\") pod \"kindnet-j79t9\" (UID: \"d76b2dd5-9a77-4340-8bbf-9c37dbb875ed\") " pod="kube-system/kindnet-j79t9"
	Dec 12 00:35:25 newest-cni-821472 kubelet[1309]: I1212 00:35:25.239350    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f73abae-7ab2-4110-a5e8-3623cf25bab2-xtables-lock\") pod \"kube-proxy-9kt8x\" (UID: \"5f73abae-7ab2-4110-a5e8-3623cf25bab2\") " pod="kube-system/kube-proxy-9kt8x"
	Dec 12 00:35:25 newest-cni-821472 kubelet[1309]: I1212 00:35:25.239371    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d76b2dd5-9a77-4340-8bbf-9c37dbb875ed-xtables-lock\") pod \"kindnet-j79t9\" (UID: \"d76b2dd5-9a77-4340-8bbf-9c37dbb875ed\") " pod="kube-system/kindnet-j79t9"
	Dec 12 00:35:25 newest-cni-821472 kubelet[1309]: I1212 00:35:25.239397    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d76b2dd5-9a77-4340-8bbf-9c37dbb875ed-lib-modules\") pod \"kindnet-j79t9\" (UID: \"d76b2dd5-9a77-4340-8bbf-9c37dbb875ed\") " pod="kube-system/kindnet-j79t9"
	Dec 12 00:35:25 newest-cni-821472 kubelet[1309]: I1212 00:35:25.779736    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-9kt8x" podStartSLOduration=0.779709156 podStartE2EDuration="779.709156ms" podCreationTimestamp="2025-12-12 00:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:25.779557886 +0000 UTC m=+6.145204934" watchObservedRunningTime="2025-12-12 00:35:25.779709156 +0000 UTC m=+6.145356200"
	Dec 12 00:35:25 newest-cni-821472 kubelet[1309]: I1212 00:35:25.779841    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-j79t9" podStartSLOduration=0.779836188 podStartE2EDuration="779.836188ms" podCreationTimestamp="2025-12-12 00:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:25.765755417 +0000 UTC m=+6.131402465" watchObservedRunningTime="2025-12-12 00:35:25.779836188 +0000 UTC m=+6.145483234"
	Dec 12 00:35:26 newest-cni-821472 kubelet[1309]: E1212 00:35:26.449534    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-821472" containerName="kube-controller-manager"
	Dec 12 00:35:27 newest-cni-821472 kubelet[1309]: E1212 00:35:27.142090    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-821472" containerName="etcd"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-821472 -n newest-cni-821472
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-821472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jh7k7 storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-821472 describe pod coredns-7d764666f9-jh7k7 storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-821472 describe pod coredns-7d764666f9-jh7k7 storage-provisioner: exit status 1 (57.034611ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jh7k7" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-821472 describe pod coredns-7d764666f9-jh7k7 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-079970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-079970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (263.701065ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-079970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-079970 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-079970 describe deploy/metrics-server -n kube-system: exit status 1 (69.326641ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-079970 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-079970
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-079970:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84",
	        "Created": "2025-12-12T00:35:00.648347206Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302296,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:35:01.194154937Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/hostname",
	        "HostsPath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/hosts",
	        "LogPath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84-json.log",
	        "Name": "/default-k8s-diff-port-079970",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-079970:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-079970",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84",
	                "LowerDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-079970",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-079970/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-079970",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-079970",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-079970",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f620ad3fc7ae233b5b4814a16a4bc9eedc770c336c638b1246220439e8538c42",
	            "SandboxKey": "/var/run/docker/netns/f620ad3fc7ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-079970": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e9d719bd40fd60bd78307adede76477356bbd0153233e68c6cf65e5ad664376",
	                    "EndpointID": "be1d819d425dd0e51d072d6041bf83026757711c045dbee0c2ce3b233c6d89ed",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "82:06:3a:d9:ec:31",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-079970",
	                        "d079df7029d4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-079970 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-079970 logs -n 25: (1.099060345s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable dashboard -p no-preload-675290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-858659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ image   │ old-k8s-version-743506 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p old-k8s-version-743506 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ image   │ no-preload-675290 image list --format=json                                                                                                                                                                                                           │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p no-preload-675290 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ delete  │ -p disable-driver-mounts-039387                                                                                                                                                                                                                      │ disable-driver-mounts-039387 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p default-k8s-diff-port-079970 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image   │ embed-certs-858659 image list --format=json                                                                                                                                                                                                          │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ pause   │ -p embed-certs-858659 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-821472 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ delete  │ -p embed-certs-858659                                                                                                                                                                                                                                │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ stop    │ -p newest-cni-821472 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p embed-certs-858659                                                                                                                                                                                                                                │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p auto-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-129742                  │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-821472 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-079970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:36.615003  313657 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:36.615132  313657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:36.615144  313657 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:36.615150  313657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:36.615357  313657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:35:36.615821  313657 out.go:368] Setting JSON to false
	I1212 00:35:36.616985  313657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4683,"bootTime":1765495054,"procs":426,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:35:36.617037  313657 start.go:143] virtualization: kvm guest
	I1212 00:35:36.619333  313657 out.go:179] * [newest-cni-821472] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:35:36.620570  313657 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:36.620576  313657 notify.go:221] Checking for updates...
	I1212 00:35:36.623071  313657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:36.624619  313657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:36.625809  313657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:35:36.627128  313657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:35:36.628277  313657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:36.629940  313657 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:36.630509  313657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:36.653789  313657 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:35:36.653878  313657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:36.711943  313657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 00:35:36.702406624 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:36.712042  313657 docker.go:319] overlay module found
	I1212 00:35:36.713883  313657 out.go:179] * Using the docker driver based on existing profile
	I1212 00:35:36.715171  313657 start.go:309] selected driver: docker
	I1212 00:35:36.715189  313657 start.go:927] validating driver "docker" against &{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:36.715263  313657 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:36.715863  313657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:36.770538  313657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 00:35:36.761701455 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:36.770807  313657 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 00:35:36.770828  313657 cni.go:84] Creating CNI manager for ""
	I1212 00:35:36.770886  313657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:36.770927  313657 start.go:353] cluster config:
	{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:36.772612  313657 out.go:179] * Starting "newest-cni-821472" primary control-plane node in "newest-cni-821472" cluster
	I1212 00:35:36.773801  313657 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:35:36.774921  313657 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:35:36.775828  313657 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:36.775859  313657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:35:36.775873  313657 cache.go:65] Caching tarball of preloaded images
	I1212 00:35:36.775931  313657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:35:36.775948  313657 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:35:36.775956  313657 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:35:36.776059  313657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:36.796954  313657 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:35:36.796975  313657 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:35:36.796994  313657 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:35:36.797025  313657 start.go:360] acquireMachinesLock for newest-cni-821472: {Name:mk1920b4afd40f764aad092389429d0db04875a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:35:36.797095  313657 start.go:364] duration metric: took 42.709µs to acquireMachinesLock for "newest-cni-821472"
	I1212 00:35:36.797116  313657 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:35:36.797133  313657 fix.go:54] fixHost starting: 
	I1212 00:35:36.797415  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:36.815009  313657 fix.go:112] recreateIfNeeded on newest-cni-821472: state=Stopped err=<nil>
	W1212 00:35:36.815030  313657 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:35:35.782567  300250 pod_ready.go:83] waiting for pod "kube-proxy-dp8fl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.182605  300250 pod_ready.go:94] pod "kube-proxy-dp8fl" is "Ready"
	I1212 00:35:36.182630  300250 pod_ready.go:86] duration metric: took 400.039495ms for pod "kube-proxy-dp8fl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.383406  300250 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.782412  300250 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-079970" is "Ready"
	I1212 00:35:36.782432  300250 pod_ready.go:86] duration metric: took 399.007194ms for pod "kube-scheduler-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.782442  300250 pod_ready.go:40] duration metric: took 1.604170425s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:35:36.826188  300250 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:35:36.828786  300250 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-079970" cluster and "default" namespace by default
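The selectors listed above are what minikube polls for after start-up; a rough manual equivalent, as a sketch only (assuming the kubectl context carries the profile name, as the "Done!" line indicates):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      # Wait until every pod matching the selector reports the Ready condition.
      kubectl --context default-k8s-diff-port-079970 -n kube-system \
        wait --for=condition=Ready pod -l "$sel" --timeout=120s
    done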
	I1212 00:35:32.667738  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:35:32.667810  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:32.667861  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:32.694152  263844 cri.go:89] found id: "e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29"
	I1212 00:35:32.694178  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:32.694185  263844 cri.go:89] found id: ""
	I1212 00:35:32.694195  263844 logs.go:282] 2 containers: [e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:32.694250  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.698030  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.701602  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:32.701662  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:32.726279  263844 cri.go:89] found id: ""
	I1212 00:35:32.726303  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.726313  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:32.726321  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:32.726370  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:32.752014  263844 cri.go:89] found id: ""
	I1212 00:35:32.752038  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.752046  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:32.752052  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:32.752101  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:32.778627  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:32.778649  263844 cri.go:89] found id: ""
	I1212 00:35:32.778659  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:32.778720  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.782297  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:32.782345  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:32.807291  263844 cri.go:89] found id: ""
	I1212 00:35:32.807319  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.807329  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:32.807336  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:32.807379  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:32.831788  263844 cri.go:89] found id: "053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95"
	I1212 00:35:32.831807  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:32.831813  263844 cri.go:89] found id: ""
	I1212 00:35:32.831822  263844 logs.go:282] 2 containers: [053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:32.831874  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.835418  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.838729  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:32.838789  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:32.862931  263844 cri.go:89] found id: ""
	I1212 00:35:32.862951  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.862958  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:32.862963  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:32.863004  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:32.887935  263844 cri.go:89] found id: ""
	I1212 00:35:32.887956  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.887966  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:32.887983  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:32.887995  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:32.975983  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:32.976009  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:32.989497  263844 logs.go:123] Gathering logs for kube-controller-manager [053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95] ...
	I1212 00:35:32.989524  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95"
	I1212 00:35:33.013710  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:33.013734  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:33.038040  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:33.038062  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:33.065603  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:33.065624  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
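The collector above shells into the node and tails each source in turn; the same diagnostics can be reproduced by hand, roughly as follows (a sketch; <container-id> stands for one of the IDs found by crictl ps):

    sudo journalctl -u kubelet -n 400                             # kubelet service log
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors
    sudo crictl ps -a                                             # container status overview
    sudo crictl logs --tail 400 <container-id>                    # logs of a specific container
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig                   # node conditions and events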
	I1212 00:35:34.587325  311720 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:35:34.610803  311720 cli_runner.go:164] Run: docker container inspect auto-129742 --format={{.State.Status}}
	I1212 00:35:34.631514  311720 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:35:34.631535  311720 kic_runner.go:114] Args: [docker exec --privileged auto-129742 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:35:34.678876  311720 cli_runner.go:164] Run: docker container inspect auto-129742 --format={{.State.Status}}
	I1212 00:35:34.698678  311720 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:34.698797  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:34.720756  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:34.721018  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:34.721035  311720 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:34.721699  311720 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40998->127.0.0.1:33098: read: connection reset by peer
	I1212 00:35:37.853402  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-129742
	
	I1212 00:35:37.853432  311720 ubuntu.go:182] provisioning hostname "auto-129742"
	I1212 00:35:37.853555  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:37.876199  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:37.876566  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:37.876594  311720 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-129742 && echo "auto-129742" | sudo tee /etc/hostname
	I1212 00:35:38.026989  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-129742
	
	I1212 00:35:38.027083  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.046329  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:38.046564  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:38.046581  311720 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-129742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-129742/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-129742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:38.176692  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:38.176731  311720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:38.176755  311720 ubuntu.go:190] setting up certificates
	I1212 00:35:38.176765  311720 provision.go:84] configureAuth start
	I1212 00:35:38.176827  311720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129742
	I1212 00:35:38.194501  311720 provision.go:143] copyHostCerts
	I1212 00:35:38.194561  311720 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:38.194575  311720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:38.194653  311720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:38.194789  311720 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:38.194801  311720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:38.194844  311720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:38.194942  311720 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:38.194954  311720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:38.194988  311720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:38.195075  311720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.auto-129742 san=[127.0.0.1 192.168.94.2 auto-129742 localhost minikube]
	I1212 00:35:38.258148  311720 provision.go:177] copyRemoteCerts
	I1212 00:35:38.258201  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:38.258239  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.276391  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:38.371020  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:35:38.389078  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:38.405517  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1212 00:35:38.421620  311720 provision.go:87] duration metric: took 244.837784ms to configureAuth
	I1212 00:35:38.421642  311720 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:38.421782  311720 config.go:182] Loaded profile config "auto-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:38.421872  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.439397  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:38.439621  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:38.439637  311720 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:38.705109  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:38.705134  311720 machine.go:97] duration metric: took 4.006432612s to provisionDockerMachine
	I1212 00:35:38.705148  311720 client.go:176] duration metric: took 8.918452873s to LocalClient.Create
	I1212 00:35:38.705164  311720 start.go:167] duration metric: took 8.91850497s to libmachine.API.Create "auto-129742"
	I1212 00:35:38.705174  311720 start.go:293] postStartSetup for "auto-129742" (driver="docker")
	I1212 00:35:38.705195  311720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:38.705258  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:38.705308  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.722687  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:38.818543  311720 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:38.821829  311720 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:38.821860  311720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:38.821871  311720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:38.821922  311720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:38.821994  311720 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:38.822078  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:38.829079  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:38.847819  311720 start.go:296] duration metric: took 142.630457ms for postStartSetup
	I1212 00:35:38.848136  311720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129742
	I1212 00:35:38.866156  311720 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/config.json ...
	I1212 00:35:38.866447  311720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:38.866537  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.884046  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:38.974764  311720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:38.978927  311720 start.go:128] duration metric: took 9.194206502s to createHost
	I1212 00:35:38.978948  311720 start.go:83] releasing machines lock for "auto-129742", held for 9.194343995s
	I1212 00:35:38.979007  311720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129742
	I1212 00:35:38.997762  311720 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:38.997818  311720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:38.997831  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.997895  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:39.016623  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:39.017350  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:39.161811  311720 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:39.167759  311720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:39.200505  311720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:39.204645  311720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:39.204707  311720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:39.228273  311720 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:35:39.228295  311720 start.go:496] detecting cgroup driver to use...
	I1212 00:35:39.228326  311720 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:39.228363  311720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:39.243212  311720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:39.254237  311720 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:39.254286  311720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:39.270799  311720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:39.286853  311720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:39.366180  311720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:39.449288  311720 docker.go:234] disabling docker service ...
	I1212 00:35:39.449359  311720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:39.466333  311720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:39.478905  311720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:39.560467  311720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:39.640214  311720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
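Condensed, the cri-dockerd/docker shutdown above amounts to the following sequence (a sketch; which units actually exist depends on the base image):

    sudo systemctl stop -f cri-docker.socket cri-docker.service   # stop cri-dockerd endpoints
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service                        # prevent reactivation
    sudo systemctl stop -f docker.socket docker.service           # stop the docker daemon
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"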
	I1212 00:35:39.651768  311720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:39.664769  311720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:39.664823  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.674135  311720 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:39.674187  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.682174  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.689970  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.698157  311720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:39.705528  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.713225  311720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.725377  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.733216  311720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:39.740029  311720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:39.746779  311720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:39.824567  311720 ssh_runner.go:195] Run: sudo systemctl restart crio
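Taken together, the CRI-O preparation above boils down to a handful of edits; a consolidated sketch of the per-step tee/sed commands in the log:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and switch CRI-O to the systemd cgroup driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    # allow forwarding, then reload units and restart the runtime
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio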
	I1212 00:35:39.964512  311720 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:39.964586  311720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:39.968445  311720 start.go:564] Will wait 60s for crictl version
	I1212 00:35:39.968513  311720 ssh_runner.go:195] Run: which crictl
	I1212 00:35:39.971962  311720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:39.995943  311720 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:35:39.996015  311720 ssh_runner.go:195] Run: crio --version
	I1212 00:35:40.023093  311720 ssh_runner.go:195] Run: crio --version
	I1212 00:35:40.050712  311720 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:35:40.051747  311720 cli_runner.go:164] Run: docker network inspect auto-129742 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:40.068586  311720 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:40.072316  311720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
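Spelled out, the one-liner above updates /etc/hosts idempotently: drop any stale host.minikube.internal entry, append the docker network gateway seen by this node, and copy the result back into place:

    { grep -v $'\thost.minikube.internal$' /etc/hosts;          # keep everything except the old entry
      printf '192.168.94.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                                 # replace the file in one step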
	I1212 00:35:40.082284  311720 kubeadm.go:884] updating cluster {Name:auto-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:40.082406  311720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:35:40.082448  311720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:40.112765  311720 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:40.112784  311720 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:40.112821  311720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:40.137112  311720 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:40.137133  311720 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:40.137142  311720 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1212 00:35:40.137235  311720 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-129742 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:35:40.137334  311720 ssh_runner.go:195] Run: crio config
	I1212 00:35:40.182977  311720 cni.go:84] Creating CNI manager for ""
	I1212 00:35:40.183007  311720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:40.183035  311720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:35:40.183068  311720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-129742 NodeName:auto-129742 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:40.183243  311720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-129742"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:35:40.183329  311720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:35:40.191388  311720 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:40.191460  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:40.199299  311720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1212 00:35:40.211054  311720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:35:40.225785  311720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1212 00:35:40.237527  311720 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:40.240865  311720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:40.250170  311720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:40.337022  311720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:40.355615  311720 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742 for IP: 192.168.94.2
	I1212 00:35:40.355638  311720 certs.go:195] generating shared ca certs ...
	I1212 00:35:40.355661  311720 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.355816  311720 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:40.355874  311720 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:40.355887  311720 certs.go:257] generating profile certs ...
	I1212 00:35:40.355951  311720 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.key
	I1212 00:35:40.355973  311720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.crt with IP's: []
	I1212 00:35:40.592669  311720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.crt ...
	I1212 00:35:40.592699  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.crt: {Name:mk76e1dd7172803fbf1cbffa40c75cb48c0a838a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.592881  311720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.key ...
	I1212 00:35:40.592899  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.key: {Name:mk1935c540fad051cff06def660ab58dd355b134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.593007  311720 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351
	I1212 00:35:40.593028  311720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1212 00:35:40.639283  311720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351 ...
	I1212 00:35:40.639305  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351: {Name:mk9861694cb000aec3a0bd5942993e1c1f27a76b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.639440  311720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351 ...
	I1212 00:35:40.639453  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351: {Name:mkcce82003e0c847ecd0e8e701673ac85b4767d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.639537  311720 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt
	I1212 00:35:40.639609  311720 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key
	I1212 00:35:40.639661  311720 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key
	I1212 00:35:40.639675  311720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt with IP's: []
	I1212 00:35:40.754363  311720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt ...
	I1212 00:35:40.754386  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt: {Name:mkf4fd1cbb46bff32b7eda0419dad9a557f8a6d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.754562  311720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key ...
	I1212 00:35:40.754578  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key: {Name:mke823010157e4fd78cfa585a52980304b928622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.754789  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:40.754836  311720 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:40.754851  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:40.754887  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:40.754924  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:40.754959  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:40.755014  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:40.755584  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:40.773207  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:40.790251  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:40.807658  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:40.823962  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1212 00:35:40.840940  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:40.857607  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:40.874553  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:35:40.891098  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:40.909544  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:40.925460  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:40.941738  311720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:35:40.953181  311720 ssh_runner.go:195] Run: openssl version
	I1212 00:35:40.958781  311720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:40.965400  311720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:40.972141  311720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:40.975535  311720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:40.975586  311720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:41.012011  311720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:41.019203  311720 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:35:41.026483  311720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.033388  311720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:41.040112  311720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.043591  311720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.043637  311720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.082194  311720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:41.090166  311720 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:35:41.097782  311720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.106091  311720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:41.113295  311720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.117095  311720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.117155  311720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.153797  311720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:41.161240  311720 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
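The <hash>.0 links created above follow OpenSSL's subject-hash lookup convention (c_rehash produces the same layout); the per-certificate pattern is roughly:

    for pem in minikubeCA.pem 14503.pem 145032.pem; do
      sudo ln -fs "/usr/share/ca-certificates/$pem" "/etc/ssl/certs/$pem"
      # derive the subject-name hash that OpenSSL uses when looking up CA certs
      hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
      sudo ln -fs "/etc/ssl/certs/$pem" "/etc/ssl/certs/$hash.0"
    done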
	I1212 00:35:41.168521  311720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:41.171803  311720 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:35:41.171846  311720 kubeadm.go:401] StartCluster: {Name:auto-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:41.171906  311720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:41.171939  311720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:41.197104  311720 cri.go:89] found id: ""
	I1212 00:35:41.197153  311720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:41.204439  311720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:35:41.211497  311720 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:35:41.211545  311720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:35:41.218528  311720 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:35:41.218544  311720 kubeadm.go:158] found existing configuration files:
	
	I1212 00:35:41.218580  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:35:41.225466  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:35:41.225526  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:35:41.232209  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:35:41.239422  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:35:41.239471  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:35:41.246617  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:35:41.253661  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:35:41.253706  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:35:41.260600  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:35:41.268103  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:35:41.268156  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:35:41.275464  311720 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:35:41.314774  311720 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:35:41.314819  311720 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:35:41.333053  311720 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:35:41.333128  311720 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:35:41.333187  311720 kubeadm.go:319] OS: Linux
	I1212 00:35:41.333281  311720 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:35:41.333353  311720 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:35:41.333411  311720 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:35:41.333538  311720 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:35:41.333609  311720 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:35:41.333670  311720 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:35:41.333731  311720 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:35:41.333787  311720 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:35:41.391557  311720 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:35:41.391703  311720 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:35:41.391828  311720 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:35:41.399245  311720 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:35:36.816773  313657 out.go:252] * Restarting existing docker container for "newest-cni-821472" ...
	I1212 00:35:36.816850  313657 cli_runner.go:164] Run: docker start newest-cni-821472
	I1212 00:35:37.080371  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:37.099121  313657 kic.go:430] container "newest-cni-821472" state is running.
	I1212 00:35:37.099565  313657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:37.118114  313657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:37.118375  313657 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:37.118451  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:37.136773  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:37.137029  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:37.137045  313657 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:37.137774  313657 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53888->127.0.0.1:33103: read: connection reset by peer
	I1212 00:35:40.271768  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:40.271794  313657 ubuntu.go:182] provisioning hostname "newest-cni-821472"
	I1212 00:35:40.271842  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.294413  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:40.294663  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:40.294683  313657 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-821472 && echo "newest-cni-821472" | sudo tee /etc/hostname
	I1212 00:35:40.435333  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:40.435409  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.455014  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:40.455218  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:40.455234  313657 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-821472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-821472/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-821472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:40.586503  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:40.586531  313657 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:40.586569  313657 ubuntu.go:190] setting up certificates
	I1212 00:35:40.586583  313657 provision.go:84] configureAuth start
	I1212 00:35:40.586659  313657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:40.605310  313657 provision.go:143] copyHostCerts
	I1212 00:35:40.605389  313657 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:40.605409  313657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:40.605489  313657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:40.605616  313657 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:40.605630  313657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:40.605684  313657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:40.605770  313657 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:40.605779  313657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:40.605818  313657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:40.605935  313657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.newest-cni-821472 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-821472]
	I1212 00:35:40.662347  313657 provision.go:177] copyRemoteCerts
	I1212 00:35:40.662411  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:40.662468  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.681297  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:40.777054  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:40.795024  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:35:40.811612  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:35:40.827828  313657 provision.go:87] duration metric: took 241.225014ms to configureAuth
	I1212 00:35:40.827853  313657 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:40.828024  313657 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:40.828117  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.846619  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:40.846888  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:40.846924  313657 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:41.128685  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:41.128707  313657 machine.go:97] duration metric: took 4.01031546s to provisionDockerMachine
	I1212 00:35:41.128720  313657 start.go:293] postStartSetup for "newest-cni-821472" (driver="docker")
	I1212 00:35:41.128735  313657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:41.128800  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:41.128844  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.146619  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.241954  313657 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:41.245283  313657 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:41.245314  313657 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:41.245327  313657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:41.245380  313657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:41.245456  313657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:41.245579  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:41.252803  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:41.270439  313657 start.go:296] duration metric: took 141.702762ms for postStartSetup
	I1212 00:35:41.270533  313657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:41.270589  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.291253  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.385395  313657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:41.389632  313657 fix.go:56] duration metric: took 4.592500888s for fixHost
	I1212 00:35:41.389658  313657 start.go:83] releasing machines lock for "newest-cni-821472", held for 4.592550322s
	I1212 00:35:41.389719  313657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:41.409111  313657 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:41.409194  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.409202  313657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:41.409264  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.426685  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.427390  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.573962  313657 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:41.580109  313657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:41.612958  313657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:41.617419  313657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:41.617521  313657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:41.625155  313657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:35:41.625174  313657 start.go:496] detecting cgroup driver to use...
	I1212 00:35:41.625206  313657 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:41.625270  313657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:41.638804  313657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:41.650436  313657 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:41.650494  313657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:41.663651  313657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:41.674712  313657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:41.753037  313657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:41.833338  313657 docker.go:234] disabling docker service ...
	I1212 00:35:41.833390  313657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:41.847161  313657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:41.858716  313657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:41.941176  313657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:42.022532  313657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:35:42.034350  313657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:42.047735  313657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:42.047788  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.055989  313657 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:42.056046  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.064168  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.072270  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.080522  313657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:42.087789  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.095782  313657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.103399  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.111579  313657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:42.118175  313657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:42.124818  313657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:42.202070  313657 ssh_runner.go:195] Run: sudo systemctl restart crio
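
(Editor's note, not part of the log: the sed edits above all target the same CRI-O drop-in, /etc/crio/crio.conf.d/02-crio.conf, setting the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl before the daemon-reload and restart. A rough bash sketch of how one might confirm the result, with expected values taken only from the commands recorded in this log:)

    # Confirm the CRI-O drop-in after the sed edits above (values as set in this log).
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls list)
    sudo systemctl daemon-reload && sudo systemctl restart crio
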
	I1212 00:35:42.349497  313657 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:42.349576  313657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:42.353384  313657 start.go:564] Will wait 60s for crictl version
	I1212 00:35:42.353437  313657 ssh_runner.go:195] Run: which crictl
	I1212 00:35:42.356828  313657 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:42.380925  313657 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:35:42.380991  313657 ssh_runner.go:195] Run: crio --version
	I1212 00:35:42.408051  313657 ssh_runner.go:195] Run: crio --version
	I1212 00:35:42.437900  313657 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:35:42.439134  313657 cli_runner.go:164] Run: docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:42.458296  313657 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:42.462111  313657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
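
(Editor's note: the one-liner above is minikube's idempotent /etc/hosts update, strip any stale host.minikube.internal entry, append the current mapping, copy the result back. Expanded for readability, purely illustrative, using the same IP and hostname as the log line:)

    # Rebuild /etc/hosts without any old host.minikube.internal line, append the
    # current mapping, then copy the temp file back into place.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.76.1\thost.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
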
	I1212 00:35:42.473493  313657 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 00:35:42.474492  313657 kubeadm.go:884] updating cluster {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:42.474641  313657 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:42.474706  313657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:42.503840  313657 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:42.503857  313657 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:42.503898  313657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:42.527719  313657 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:42.527738  313657 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:42.527746  313657 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 00:35:42.527850  313657 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-821472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:35:42.527930  313657 ssh_runner.go:195] Run: crio config
	I1212 00:35:42.586235  313657 cni.go:84] Creating CNI manager for ""
	I1212 00:35:42.586263  313657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:42.586281  313657 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 00:35:42.586309  313657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-821472 NodeName:newest-cni-821472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:42.586491  313657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-821472"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
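
(Editor's note: the kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hypothetical sanity check, not something this test run performs, the same kubeadm binary could render the plan without touching the node:)

    # Illustrative only: dry-run the generated config with the kubeadm binary minikube installed.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
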
	
	I1212 00:35:42.586563  313657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:35:42.595617  313657 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:42.595679  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:42.603149  313657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:35:42.615085  313657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:35:42.626755  313657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1212 00:35:42.638262  313657 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:42.641589  313657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:42.650971  313657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:42.728755  313657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:42.751540  313657 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472 for IP: 192.168.76.2
	I1212 00:35:42.751556  313657 certs.go:195] generating shared ca certs ...
	I1212 00:35:42.751573  313657 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:42.751733  313657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:42.751802  313657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:42.751819  313657 certs.go:257] generating profile certs ...
	I1212 00:35:42.751927  313657 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key
	I1212 00:35:42.751999  313657 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0
	I1212 00:35:42.752048  313657 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key
	I1212 00:35:42.752192  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:42.752235  313657 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:42.752248  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:42.752283  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:42.752318  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:42.752360  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:42.752415  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:42.753053  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:42.770854  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:42.790078  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:42.807620  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:42.828730  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:35:42.849521  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:42.865729  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:42.881653  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:35:42.898013  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:42.914298  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:42.930406  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:42.947383  313657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:35:42.959592  313657 ssh_runner.go:195] Run: openssl version
	I1212 00:35:42.965251  313657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:42.971937  313657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:42.978671  313657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:42.982018  313657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:42.982066  313657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:43.016314  313657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:43.023303  313657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.030374  313657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:43.037133  313657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.040548  313657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.040580  313657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.074151  313657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:43.081008  313657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.087822  313657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:43.094643  313657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.098274  313657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.098321  313657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.133554  313657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
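
(Editor's note: each of the three certificate blocks above follows the same pattern, copy the PEM under /usr/share/ca-certificates, then expose it to OpenSSL through a symlink named after its subject hash, b5213941.0, 51391683.0 and 3ec20f2e.0 in this log. A minimal sketch of that pattern for a single cert, file name taken from the minikubeCA entry above:)

    # Hash-and-symlink pattern used above for installing a CA certificate.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")      # prints the subject hash, e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"     # OpenSSL looks CAs up via <hash>.0 links
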
	I1212 00:35:43.141912  313657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:43.146261  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:35:43.186934  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:35:43.225313  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:35:43.263428  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:35:43.309905  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:35:43.363587  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
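
(Editor's note: the -checkend 86400 calls above ask OpenSSL whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit presumably sends the restart path down the certificate-regeneration branch. Sketch of the check for one cert, path taken from the log:)

    # openssl exits 0 if the cert is valid for at least another 86400 seconds (24h), 1 otherwise.
    if ! sudo openssl x509 -noout -checkend 86400 \
           -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
        echo "apiserver-kubelet-client.crt expires within 24h"
    fi
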
	I1212 00:35:43.422765  313657 kubeadm.go:401] StartCluster: {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:43.422885  313657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:43.422972  313657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:43.457007  313657 cri.go:89] found id: "0a27b96d1bb7b73ed434ac959df9965a37373d6cec1e595449d9d42feb886a00"
	I1212 00:35:43.457043  313657 cri.go:89] found id: "9783b15121a958e88f6034a4a1e6706329e424058eba2e6f8347546407dabcf0"
	I1212 00:35:43.457056  313657 cri.go:89] found id: "174c7ee6c2a1a9165b183524a2e37c57f1d18763ab56511debb798c7745ad211"
	I1212 00:35:43.457062  313657 cri.go:89] found id: "d7a4fba82f0eabc43ef5bf98075ffc80893b0fb5b78287f6ce879a700f528fe8"
	I1212 00:35:43.457066  313657 cri.go:89] found id: ""
	I1212 00:35:43.457148  313657 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 00:35:43.470031  313657 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:43Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:35:43.470109  313657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:43.478723  313657 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:35:43.478750  313657 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:35:43.478793  313657 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:35:43.486064  313657 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:35:43.486980  313657 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-821472" does not appear in /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:43.487360  313657 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-10975/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-821472" cluster setting kubeconfig missing "newest-cni-821472" context setting]
	I1212 00:35:43.488053  313657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:43.489882  313657 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:35:43.497249  313657 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 00:35:43.497276  313657 kubeadm.go:602] duration metric: took 18.518865ms to restartPrimaryControlPlane
	I1212 00:35:43.497296  313657 kubeadm.go:403] duration metric: took 74.544874ms to StartCluster
	I1212 00:35:43.497311  313657 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:43.497364  313657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:43.498716  313657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:43.498945  313657 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:35:43.499041  313657 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:35:43.499128  313657 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:43.499138  313657 addons.go:70] Setting dashboard=true in profile "newest-cni-821472"
	I1212 00:35:43.499151  313657 addons.go:239] Setting addon dashboard=true in "newest-cni-821472"
	W1212 00:35:43.499160  313657 addons.go:248] addon dashboard should already be in state true
	I1212 00:35:43.499168  313657 addons.go:70] Setting default-storageclass=true in profile "newest-cni-821472"
	I1212 00:35:43.499128  313657 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-821472"
	I1212 00:35:43.499189  313657 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:43.499192  313657 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-821472"
	I1212 00:35:43.499191  313657 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-821472"
	W1212 00:35:43.499199  313657 addons.go:248] addon storage-provisioner should already be in state true
	I1212 00:35:43.499216  313657 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:43.499523  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.499674  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.499807  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.502446  313657 out.go:179] * Verifying Kubernetes components...
	I1212 00:35:43.503509  313657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:43.528175  313657 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 00:35:43.528255  313657 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:35:43.529215  313657 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:43.529234  313657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:35:43.529309  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:43.530562  313657 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 00:35:41.403306  311720 out.go:252]   - Generating certificates and keys ...
	I1212 00:35:41.403420  311720 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:35:41.403546  311720 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:35:41.610366  311720 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:35:41.904549  311720 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:35:42.236869  311720 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:35:42.432939  311720 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:35:42.670711  311720 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:35:42.670850  311720 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-129742 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:35:42.733388  311720 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:35:42.733562  311720 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-129742 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:35:42.890557  311720 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:35:43.189890  311720 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:35:43.432084  311720 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:35:43.432181  311720 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:35:44.496937  311720 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:35:43.532520  313657 addons.go:239] Setting addon default-storageclass=true in "newest-cni-821472"
	W1212 00:35:43.532542  313657 addons.go:248] addon default-storageclass should already be in state true
	I1212 00:35:43.532571  313657 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:43.533019  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.533188  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 00:35:43.533204  313657 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 00:35:43.533268  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:43.564658  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:43.566726  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:43.571698  313657 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:43.571744  313657 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:35:43.571811  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:43.596063  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:43.673895  313657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:43.689745  313657 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:35:43.689816  313657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:35:43.691914  313657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:43.694831  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 00:35:43.694849  313657 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 00:35:43.704985  313657 api_server.go:72] duration metric: took 206.002727ms to wait for apiserver process to appear ...
	I1212 00:35:43.705014  313657 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:35:43.705040  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:43.711963  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 00:35:43.711982  313657 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 00:35:43.713874  313657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:43.728659  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 00:35:43.728713  313657 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 00:35:43.745923  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 00:35:43.745946  313657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 00:35:43.764806  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 00:35:43.764899  313657 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 00:35:43.779853  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 00:35:43.779879  313657 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 00:35:43.794230  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 00:35:43.794256  313657 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 00:35:43.807520  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 00:35:43.807543  313657 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 00:35:43.823054  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:35:43.823074  313657 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 00:35:43.836827  313657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:35:44.593775  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:35:44.593804  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:35:44.593820  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:44.599187  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:35:44.599214  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:35:44.705950  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:44.710897  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:35:44.710933  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:35:45.196541  313657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.50459537s)
	I1212 00:35:45.196600  313657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.482698107s)
	I1212 00:35:45.196696  313657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.359839372s)
	I1212 00:35:45.198335  313657 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-821472 addons enable metrics-server
	
	I1212 00:35:45.205281  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:45.210437  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:35:45.210462  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:35:45.213577  313657 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1212 00:35:44.808888  311720 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:35:45.083910  311720 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:35:45.340764  311720 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:35:45.748663  311720 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:35:45.749566  311720 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:35:45.754425  311720 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
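	[editor's note] The repeated 403/500 responses earlier in this log are minikube polling the apiserver's /healthz endpoint while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still running; the check simply retries until /healthz returns 200. Below is a minimal Go sketch of that kind of poll loop for readers triaging the report. It is an illustration only, not the minikube api_server.go implementation; the endpoint URL is taken from the log, while the timeout, retry interval, and the TLS-verification skip are assumptions.

	// healthzpoll: minimal sketch of polling a kube-apiserver /healthz endpoint
	// until it reports healthy, treating 403/500 as "not ready yet".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint as seen in the log above; adjust for your cluster.
		url := "https://192.168.76.2:8443/healthz"

		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a self-signed certificate here; a real client
			// would load the cluster CA instead of skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		deadline := time.Now().Add(2 * time.Minute) // assumed overall budget
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 (anonymous forbidden) and 500 (post-start hooks pending) both
			// mean "keep waiting", which is what the retries in the log reflect.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for /healthz")
	}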
	
	
	==> CRI-O <==
	Dec 12 00:35:34 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:34.499272387Z" level=info msg="Started container" PID=1846 containerID=c0e515785a5fc86b1cefa4e62e827f9cd92ff9f23c7517c274e3edbc356ab064 description=kube-system/coredns-66bc5c9577-jdmpv/coredns id=6b3b143c-83f7-4126-8e98-088d9ceea9f6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31e6167bb92553ea1da5bce7373b9f35b087498690001670edf4ee847574988d
	Dec 12 00:35:34 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:34.499682907Z" level=info msg="Started container" PID=1845 containerID=fe69c736c9e0264b3e207a193ae576bdba909900bf7836eec89c3820c042f145 description=kube-system/storage-provisioner/storage-provisioner id=df25fde1-9274-4085-9116-0e2f66489eda name=/runtime.v1.RuntimeService/StartContainer sandboxID=97f0f3e18fe81499724e78e5e1b00d342304c17632282ed7c92701d36c0d7bae
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.311011957Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5d5743d0-1ffb-45ba-82a1-3dcce35e98de name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.311100509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.316939837Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:35c5487ffac7ead72913ecb5db6640c811c2f493f72491f8ad13018fa232c325 UID:dac1f5b6-efb8-48cc-90f3-3ba30e837989 NetNS:/var/run/netns/6cc5bb06-13d2-40fe-966d-3815349054c5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009ac6e0}] Aliases:map[]}"
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.31710995Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.33107244Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:35c5487ffac7ead72913ecb5db6640c811c2f493f72491f8ad13018fa232c325 UID:dac1f5b6-efb8-48cc-90f3-3ba30e837989 NetNS:/var/run/netns/6cc5bb06-13d2-40fe-966d-3815349054c5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009ac6e0}] Aliases:map[]}"
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.331377939Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.333313666Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.334553776Z" level=info msg="Ran pod sandbox 35c5487ffac7ead72913ecb5db6640c811c2f493f72491f8ad13018fa232c325 with infra container: default/busybox/POD" id=5d5743d0-1ffb-45ba-82a1-3dcce35e98de name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.335924513Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e9e98d4f-0cf0-41ff-90d3-b63c90b5de43 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.336058449Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e9e98d4f-0cf0-41ff-90d3-b63c90b5de43 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.336138939Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e9e98d4f-0cf0-41ff-90d3-b63c90b5de43 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.337013482Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f37cc3fb-5881-4af5-927d-35877d62545f name=/runtime.v1.ImageService/PullImage
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.342134032Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.988862997Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f37cc3fb-5881-4af5-927d-35877d62545f name=/runtime.v1.ImageService/PullImage
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.989333225Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b64269b-3bbf-41f6-b493-91c17c0ab3d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.990415938Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=306b7186-620f-4430-8186-b0f44a889c44 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.993228097Z" level=info msg="Creating container: default/busybox/busybox" id=f5d94daa-48e3-4085-9076-446d3a4ed34a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.993346917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.996647275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:37 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:37.997005188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:38 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:38.025984963Z" level=info msg="Created container b1a8816f38b977412d8447d4776873753844aca37a14b3eebff1f7faf038c08c: default/busybox/busybox" id=f5d94daa-48e3-4085-9076-446d3a4ed34a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:38 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:38.026531957Z" level=info msg="Starting container: b1a8816f38b977412d8447d4776873753844aca37a14b3eebff1f7faf038c08c" id=b877a5e7-b6bd-426c-a476-af07fe018bf6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:35:38 default-k8s-diff-port-079970 crio[771]: time="2025-12-12T00:35:38.028598885Z" level=info msg="Started container" PID=1922 containerID=b1a8816f38b977412d8447d4776873753844aca37a14b3eebff1f7faf038c08c description=default/busybox/busybox id=b877a5e7-b6bd-426c-a476-af07fe018bf6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=35c5487ffac7ead72913ecb5db6640c811c2f493f72491f8ad13018fa232c325
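	[editor's note] The CRI-O log above traces the standard CRI image flow for the busybox pod: ImageStatus reports the image missing, PullImage fetches it by digest, then CreateContainer/StartContainer run it. The Go sketch below issues the same two ImageService RPCs against a CRI socket using the stock k8s.io/cri-api v1 client; the socket path is CRI-O's conventional default and, together with the image reference, is an assumption for illustration rather than anything taken from the test harness.

	// crimagepull: minimal sketch of the ImageStatus -> PullImage sequence
	// visible in the CRI-O log above, using the CRI v1 gRPC API directly.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		const socket = "unix:///var/run/crio/crio.sock" // assumed CRI-O endpoint
		const image = "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

		conn, err := grpc.Dial(socket, grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()

		img := runtimeapi.NewImageServiceClient(conn)

		// Step 1: ImageStatus -- the equivalent of the "Checking image status" lines.
		status, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
			Image: &runtimeapi.ImageSpec{Image: image},
		})
		if err != nil {
			log.Fatalf("ImageStatus: %v", err)
		}

		// Step 2: PullImage only when the image is absent, matching the
		// "not found" -> "Pulling image" -> "Pulled image" sequence in the log.
		if status.GetImage() == nil {
			pulled, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
				Image: &runtimeapi.ImageSpec{Image: image},
			})
			if err != nil {
				log.Fatalf("PullImage: %v", err)
			}
			fmt.Println("pulled:", pulled.GetImageRef())
			return
		}
		fmt.Println("already present:", status.GetImage().GetId())
	}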
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	b1a8816f38b97       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   35c5487ffac7e       busybox                                                default
	c0e515785a5fc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   31e6167bb9255       coredns-66bc5c9577-jdmpv                               kube-system
	fe69c736c9e02       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   97f0f3e18fe81       storage-provisioner                                    kube-system
	d4ef827821084       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   0ef1280a3a0d1       kindnet-g8hsv                                          kube-system
	489dedc4f133b       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   911749ee5ea3f       kube-proxy-dp8fl                                       kube-system
	d230ba3547759       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   28f200880d1d4       kube-controller-manager-default-k8s-diff-port-079970   kube-system
	98f6e69f5dfdc       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   17fe3db0f6b03       etcd-default-k8s-diff-port-079970                      kube-system
	c04914af48483       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   85c7801e6e6a2       kube-scheduler-default-k8s-diff-port-079970            kube-system
	36c2ab76b9fb3       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   5a869110e2e4b       kube-apiserver-default-k8s-diff-port-079970            kube-system
	
	
	==> coredns [c0e515785a5fc86b1cefa4e62e827f9cd92ff9f23c7517c274e3edbc356ab064] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50839 - 60839 "HINFO IN 4758028250486113876.1007221336569843127. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066390769s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-079970
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-079970
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=default-k8s-diff-port-079970
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_35_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:35:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-079970
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:35:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:35:33 +0000   Fri, 12 Dec 2025 00:35:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:35:33 +0000   Fri, 12 Dec 2025 00:35:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:35:33 +0000   Fri, 12 Dec 2025 00:35:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:35:33 +0000   Fri, 12 Dec 2025 00:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-079970
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                2f6ae672-3816-4aaa-aade-b1dfd5ff98c4
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-jdmpv                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-079970                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-g8hsv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-079970             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-079970    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-dp8fl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-079970             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node default-k8s-diff-port-079970 event: Registered Node default-k8s-diff-port-079970 in Controller
	  Normal  NodeReady                13s                kubelet          Node default-k8s-diff-port-079970 status is now: NodeReady
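	[editor's note] The node description above is what kubectl renders from the node's status object; the Ready/MemoryPressure/DiskPressure/PIDPressure rows are entries of the node's status conditions. The client-go sketch below reads the same conditions; it assumes a reachable kubeconfig at the default path and is only an illustration of where this data lives, not part of the test suite.

	// nodeconditions: minimal client-go sketch that prints the node conditions
	// summarized in the "describe nodes" section above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		kubeconfig := filepath.Join(home, ".kube", "config") // assumed location

		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatalf("load kubeconfig: %v", err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("build clientset: %v", err)
		}

		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatalf("list nodes: %v", err)
		}
		for _, n := range nodes.Items {
			fmt.Println("node:", n.Name)
			// Ready / MemoryPressure / DiskPressure / PIDPressure in the table
			// above correspond to entries of n.Status.Conditions.
			for _, c := range n.Status.Conditions {
				fmt.Printf("  %-16s %-6s %s\n", c.Type, c.Status, c.Reason)
			}
		}
	}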
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [98f6e69f5dfdc6e6d5b68c345bcfaa1ae36966ff2867de6ae4d818a9c3a78633] <==
	{"level":"warn","ts":"2025-12-12T00:35:14.657402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.664453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.670586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.678121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.685204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.693035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.699106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.706558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.712412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.718541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.725888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.731980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.738585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.744682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.751096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.757952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.764456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.770523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.796245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.803703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.813428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:14.870698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T00:35:32.654561Z","caller":"traceutil/trace.go:172","msg":"trace[1945764762] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"127.032021ms","start":"2025-12-12T00:35:32.527510Z","end":"2025-12-12T00:35:32.654543Z","steps":["trace[1945764762] 'process raft request'  (duration: 126.876661ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:35:34.112453Z","caller":"traceutil/trace.go:172","msg":"trace[121646374] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"110.389497ms","start":"2025-12-12T00:35:34.002045Z","end":"2025-12-12T00:35:34.112434Z","steps":["trace[121646374] 'process raft request'  (duration: 110.271404ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:35:34.121008Z","caller":"traceutil/trace.go:172","msg":"trace[800519462] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"116.307926ms","start":"2025-12-12T00:35:34.004688Z","end":"2025-12-12T00:35:34.120996Z","steps":["trace[800519462] 'process raft request'  (duration: 116.221964ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:35:46 up  1:18,  0 user,  load average: 4.61, 3.38, 2.10
	Linux default-k8s-diff-port-079970 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d4ef82782108405f1e3205138dd6158f7f3febab771e2ae239cbb50ca48476b2] <==
	I1212 00:35:23.499718       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:35:23.500102       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1212 00:35:23.500252       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:35:23.500273       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:35:23.500300       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:35:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:35:23.886843       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:35:23.886901       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:35:23.886918       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:35:23.887060       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:35:24.187280       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:35:24.187395       1 metrics.go:72] Registering metrics
	I1212 00:35:24.187521       1 controller.go:711] "Syncing nftables rules"
	I1212 00:35:33.705665       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:35:33.705721       1 main.go:301] handling current node
	I1212 00:35:43.709580       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:35:43.709635       1 main.go:301] handling current node
	
	
	==> kube-apiserver [36c2ab76b9fb39c5f2475bf92bf35b2fad551f2532067b2d35406d7590722b94] <==
	I1212 00:35:15.372163       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:35:15.375318       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:35:15.375667       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1212 00:35:15.408491       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1212 00:35:15.456020       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:35:15.551411       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:35:16.260914       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 00:35:16.264836       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 00:35:16.264856       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:35:16.724503       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:35:16.764625       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:35:16.866231       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 00:35:16.872206       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1212 00:35:16.873391       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:35:16.876936       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:35:17.299352       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:35:18.092606       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:35:18.101250       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 00:35:18.108754       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 00:35:22.898797       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1212 00:35:22.898798       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1212 00:35:23.051231       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:35:23.055216       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:35:23.148428       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1212 00:35:45.106087       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:49886: use of closed network connection
	
	
	==> kube-controller-manager [d230ba3547759a2c895ef1b325cbe8fb7e34343eec9e0992a0763f563855d6d2] <==
	I1212 00:35:22.194231       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1212 00:35:22.194260       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 00:35:22.194375       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 00:35:22.194529       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-079970"
	I1212 00:35:22.194627       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1212 00:35:22.195507       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 00:35:22.195544       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1212 00:35:22.195859       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 00:35:22.196788       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 00:35:22.196811       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1212 00:35:22.196952       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 00:35:22.197811       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 00:35:22.197831       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 00:35:22.199041       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 00:35:22.200232       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 00:35:22.202510       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 00:35:22.204770       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:35:22.208059       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1212 00:35:22.215357       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 00:35:22.312000       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 00:35:22.405242       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:35:22.405265       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 00:35:22.405277       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 00:35:22.412400       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:35:37.196046       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [489dedc4f133b7cd87640ecd2cf73d09e637fb61ee8562c3e23f4ff5781ed5fd] <==
	I1212 00:35:23.310781       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:35:23.378794       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 00:35:23.479505       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 00:35:23.479546       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1212 00:35:23.479647       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:35:23.500816       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:35:23.500873       1 server_linux.go:132] "Using iptables Proxier"
	I1212 00:35:23.507993       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:35:23.509888       1 server.go:527] "Version info" version="v1.34.2"
	I1212 00:35:23.509916       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:35:23.511581       1 config.go:309] "Starting node config controller"
	I1212 00:35:23.511598       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:35:23.511603       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:35:23.511619       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:35:23.511658       1 config.go:200] "Starting service config controller"
	I1212 00:35:23.511666       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:35:23.511680       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:35:23.511689       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:35:23.611763       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:35:23.611771       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:35:23.611779       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:35:23.611794       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c04914af48483e1f5424e4a3996004e5b6f4f59997b4688738bb2cc70ca3b333] <==
	E1212 00:35:15.314701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 00:35:15.314686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 00:35:15.315079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 00:35:15.314702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 00:35:15.316921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 00:35:15.316971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 00:35:15.316991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 00:35:15.317054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 00:35:15.317131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 00:35:15.317140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 00:35:15.317195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 00:35:15.317252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 00:35:15.317283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 00:35:16.204038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1212 00:35:16.211258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 00:35:16.215500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 00:35:16.257878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 00:35:16.257929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 00:35:16.324469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 00:35:16.325459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 00:35:16.366994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 00:35:16.374151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 00:35:16.409232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 00:35:16.513149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1212 00:35:18.611333       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:35:18 default-k8s-diff-port-079970 kubelet[1311]: E1212 00:35:18.953759    1311 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-079970\" already exists" pod="kube-system/etcd-default-k8s-diff-port-079970"
	Dec 12 00:35:18 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:18.985323    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-079970" podStartSLOduration=0.985301443 podStartE2EDuration="985.301443ms" podCreationTimestamp="2025-12-12 00:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:18.985198237 +0000 UTC m=+1.147122056" watchObservedRunningTime="2025-12-12 00:35:18.985301443 +0000 UTC m=+1.147225261"
	Dec 12 00:35:18 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:18.985436    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-079970" podStartSLOduration=0.985424809 podStartE2EDuration="985.424809ms" podCreationTimestamp="2025-12-12 00:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:18.976794879 +0000 UTC m=+1.138718697" watchObservedRunningTime="2025-12-12 00:35:18.985424809 +0000 UTC m=+1.147348626"
	Dec 12 00:35:19 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:19.001210    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-079970" podStartSLOduration=2.001197507 podStartE2EDuration="2.001197507s" podCreationTimestamp="2025-12-12 00:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:18.993776563 +0000 UTC m=+1.155700383" watchObservedRunningTime="2025-12-12 00:35:19.001197507 +0000 UTC m=+1.163121365"
	Dec 12 00:35:19 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:19.010070    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-079970" podStartSLOduration=1.010050461 podStartE2EDuration="1.010050461s" podCreationTimestamp="2025-12-12 00:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:19.001595124 +0000 UTC m=+1.163518922" watchObservedRunningTime="2025-12-12 00:35:19.010050461 +0000 UTC m=+1.171974281"
	Dec 12 00:35:22 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:22.231440    1311 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 00:35:22 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:22.232185    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 00:35:22 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:22.934877    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3fc78112-a647-4ec7-b177-8e241a381425-kube-proxy\") pod \"kube-proxy-dp8fl\" (UID: \"3fc78112-a647-4ec7-b177-8e241a381425\") " pod="kube-system/kube-proxy-dp8fl"
	Dec 12 00:35:22 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:22.934915    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fc78112-a647-4ec7-b177-8e241a381425-lib-modules\") pod \"kube-proxy-dp8fl\" (UID: \"3fc78112-a647-4ec7-b177-8e241a381425\") " pod="kube-system/kube-proxy-dp8fl"
	Dec 12 00:35:22 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:22.934932    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72d601ad-0ab2-4f0a-b325-4cc61f2a4c2c-cni-cfg\") pod \"kindnet-g8hsv\" (UID: \"72d601ad-0ab2-4f0a-b325-4cc61f2a4c2c\") " pod="kube-system/kindnet-g8hsv"
	Dec 12 00:35:22 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:22.934957    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t7jd\" (UniqueName: \"kubernetes.io/projected/3fc78112-a647-4ec7-b177-8e241a381425-kube-api-access-8t7jd\") pod \"kube-proxy-dp8fl\" (UID: \"3fc78112-a647-4ec7-b177-8e241a381425\") " pod="kube-system/kube-proxy-dp8fl"
	Dec 12 00:35:22 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:22.935049    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72d601ad-0ab2-4f0a-b325-4cc61f2a4c2c-xtables-lock\") pod \"kindnet-g8hsv\" (UID: \"72d601ad-0ab2-4f0a-b325-4cc61f2a4c2c\") " pod="kube-system/kindnet-g8hsv"
	Dec 12 00:35:22 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:22.935084    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72d601ad-0ab2-4f0a-b325-4cc61f2a4c2c-lib-modules\") pod \"kindnet-g8hsv\" (UID: \"72d601ad-0ab2-4f0a-b325-4cc61f2a4c2c\") " pod="kube-system/kindnet-g8hsv"
	Dec 12 00:35:22 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:22.935106    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fc78112-a647-4ec7-b177-8e241a381425-xtables-lock\") pod \"kube-proxy-dp8fl\" (UID: \"3fc78112-a647-4ec7-b177-8e241a381425\") " pod="kube-system/kube-proxy-dp8fl"
	Dec 12 00:35:22 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:22.935121    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwlbm\" (UniqueName: \"kubernetes.io/projected/72d601ad-0ab2-4f0a-b325-4cc61f2a4c2c-kube-api-access-wwlbm\") pod \"kindnet-g8hsv\" (UID: \"72d601ad-0ab2-4f0a-b325-4cc61f2a4c2c\") " pod="kube-system/kindnet-g8hsv"
	Dec 12 00:35:24 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:24.015781    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dp8fl" podStartSLOduration=2.015737659 podStartE2EDuration="2.015737659s" podCreationTimestamp="2025-12-12 00:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:24.013437599 +0000 UTC m=+6.175361440" watchObservedRunningTime="2025-12-12 00:35:24.015737659 +0000 UTC m=+6.177661477"
	Dec 12 00:35:24 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:24.016286    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-g8hsv" podStartSLOduration=2.01626777 podStartE2EDuration="2.01626777s" podCreationTimestamp="2025-12-12 00:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:23.980627898 +0000 UTC m=+6.142551721" watchObservedRunningTime="2025-12-12 00:35:24.01626777 +0000 UTC m=+6.178191592"
	Dec 12 00:35:33 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:33.999897    1311 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 12 00:35:34 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:34.213911    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a57868f1-738e-4b86-b87c-b7f54a01d3a3-config-volume\") pod \"coredns-66bc5c9577-jdmpv\" (UID: \"a57868f1-738e-4b86-b87c-b7f54a01d3a3\") " pod="kube-system/coredns-66bc5c9577-jdmpv"
	Dec 12 00:35:34 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:34.213978    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ea3bf36c-2744-415c-b0e4-e7f2142cf632-tmp\") pod \"storage-provisioner\" (UID: \"ea3bf36c-2744-415c-b0e4-e7f2142cf632\") " pod="kube-system/storage-provisioner"
	Dec 12 00:35:34 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:34.214004    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cd5j\" (UniqueName: \"kubernetes.io/projected/a57868f1-738e-4b86-b87c-b7f54a01d3a3-kube-api-access-4cd5j\") pod \"coredns-66bc5c9577-jdmpv\" (UID: \"a57868f1-738e-4b86-b87c-b7f54a01d3a3\") " pod="kube-system/coredns-66bc5c9577-jdmpv"
	Dec 12 00:35:34 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:34.214033    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx45l\" (UniqueName: \"kubernetes.io/projected/ea3bf36c-2744-415c-b0e4-e7f2142cf632-kube-api-access-zx45l\") pod \"storage-provisioner\" (UID: \"ea3bf36c-2744-415c-b0e4-e7f2142cf632\") " pod="kube-system/storage-provisioner"
	Dec 12 00:35:34 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:34.981467    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.981436026 podStartE2EDuration="10.981436026s" podCreationTimestamp="2025-12-12 00:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:34.98083794 +0000 UTC m=+17.142761759" watchObservedRunningTime="2025-12-12 00:35:34.981436026 +0000 UTC m=+17.143359847"
	Dec 12 00:35:37 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:37.003863    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jdmpv" podStartSLOduration=14.003835858 podStartE2EDuration="14.003835858s" podCreationTimestamp="2025-12-12 00:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:35:34.992257486 +0000 UTC m=+17.154181305" watchObservedRunningTime="2025-12-12 00:35:37.003835858 +0000 UTC m=+19.165759677"
	Dec 12 00:35:37 default-k8s-diff-port-079970 kubelet[1311]: I1212 00:35:37.032965    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6ckm\" (UniqueName: \"kubernetes.io/projected/dac1f5b6-efb8-48cc-90f3-3ba30e837989-kube-api-access-z6ckm\") pod \"busybox\" (UID: \"dac1f5b6-efb8-48cc-90f3-3ba30e837989\") " pod="default/busybox"
	
	
	==> storage-provisioner [fe69c736c9e0264b3e207a193ae576bdba909900bf7836eec89c3820c042f145] <==
	I1212 00:35:34.514121       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:35:34.523055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:35:34.523114       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 00:35:34.525988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:34.532468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:35:34.532717       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:35:34.532914       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079970_81958f1c-bd91-447f-b7c9-cb3213c5c303!
	I1212 00:35:34.532918       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80539c40-e5b1-4dda-83b4-30c234eea46b", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-079970_81958f1c-bd91-447f-b7c9-cb3213c5c303 became leader
	W1212 00:35:34.535194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:34.540419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:35:34.633284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079970_81958f1c-bd91-447f-b7c9-cb3213c5c303!
	W1212 00:35:36.544170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:36.547977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:38.551046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:38.554663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:40.557920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:40.562012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:42.565330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:42.570796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:44.575409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:44.581130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:46.584430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:35:46.591402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-079970 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.30s)
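Note on the storage-provisioner log above: the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings line up with the leader-election renewals (and the LeaderElection event is emitted against an Endpoints object), suggesting the provisioner's lock still goes through the legacy Endpoints resource. For reference, a minimal client-go sketch of reading the replacement resource, EndpointSlices, directly; this is illustrative only — the kubeconfig handling and namespace are assumptions for the example, not values taken from this run.

	// Illustrative sketch: list discovery.k8s.io/v1 EndpointSlices with client-go,
	// the resource the deprecation warning points at. Kubeconfig path and
	// namespace are assumptions, not taken from this test run.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// EndpointSlice instead of the deprecated core/v1 Endpoints.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, "endpoints:", len(s.Endpoints))
		}
	}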

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-821472 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-821472 --alsologtostderr -v=1: exit status 80 (2.405468738s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-821472 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:35:46.990789  317449 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:46.990904  317449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:46.990912  317449 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:46.990916  317449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:46.991126  317449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:35:46.991525  317449 out.go:368] Setting JSON to false
	I1212 00:35:46.991551  317449 mustload.go:66] Loading cluster: newest-cni-821472
	I1212 00:35:46.992127  317449 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:46.992695  317449 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:47.015434  317449 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:47.015721  317449 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:47.078168  317449 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-12 00:35:47.067241937 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:47.078904  317449 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-821472 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 00:35:47.080617  317449 out.go:179] * Pausing node newest-cni-821472 ... 
	I1212 00:35:47.081728  317449 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:47.081964  317449 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:47.082009  317449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:47.100334  317449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:47.194583  317449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:47.206412  317449 pause.go:52] kubelet running: true
	I1212 00:35:47.206460  317449 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:35:47.390996  317449 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:35:47.391098  317449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:35:47.482822  317449 cri.go:89] found id: "d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc"
	I1212 00:35:47.482847  317449 cri.go:89] found id: "8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80"
	I1212 00:35:47.482853  317449 cri.go:89] found id: "0a27b96d1bb7b73ed434ac959df9965a37373d6cec1e595449d9d42feb886a00"
	I1212 00:35:47.482858  317449 cri.go:89] found id: "9783b15121a958e88f6034a4a1e6706329e424058eba2e6f8347546407dabcf0"
	I1212 00:35:47.482863  317449 cri.go:89] found id: "174c7ee6c2a1a9165b183524a2e37c57f1d18763ab56511debb798c7745ad211"
	I1212 00:35:47.482868  317449 cri.go:89] found id: "d7a4fba82f0eabc43ef5bf98075ffc80893b0fb5b78287f6ce879a700f528fe8"
	I1212 00:35:47.482873  317449 cri.go:89] found id: ""
	I1212 00:35:47.482915  317449 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:35:47.496047  317449 retry.go:31] will retry after 166.69358ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:47Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:35:47.663578  317449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:47.676748  317449 pause.go:52] kubelet running: false
	I1212 00:35:47.676805  317449 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:35:47.876738  317449 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:35:47.876819  317449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:35:47.946577  317449 cri.go:89] found id: "d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc"
	I1212 00:35:47.946603  317449 cri.go:89] found id: "8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80"
	I1212 00:35:47.946611  317449 cri.go:89] found id: "0a27b96d1bb7b73ed434ac959df9965a37373d6cec1e595449d9d42feb886a00"
	I1212 00:35:47.946617  317449 cri.go:89] found id: "9783b15121a958e88f6034a4a1e6706329e424058eba2e6f8347546407dabcf0"
	I1212 00:35:47.946622  317449 cri.go:89] found id: "174c7ee6c2a1a9165b183524a2e37c57f1d18763ab56511debb798c7745ad211"
	I1212 00:35:47.946627  317449 cri.go:89] found id: "d7a4fba82f0eabc43ef5bf98075ffc80893b0fb5b78287f6ce879a700f528fe8"
	I1212 00:35:47.946632  317449 cri.go:89] found id: ""
	I1212 00:35:47.946686  317449 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:35:47.959540  317449 retry.go:31] will retry after 473.134047ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:47Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:35:48.433199  317449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:48.448556  317449 pause.go:52] kubelet running: false
	I1212 00:35:48.448616  317449 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:35:48.594955  317449 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:35:48.595039  317449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:35:48.682380  317449 cri.go:89] found id: "d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc"
	I1212 00:35:48.682408  317449 cri.go:89] found id: "8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80"
	I1212 00:35:48.682414  317449 cri.go:89] found id: "0a27b96d1bb7b73ed434ac959df9965a37373d6cec1e595449d9d42feb886a00"
	I1212 00:35:48.682419  317449 cri.go:89] found id: "9783b15121a958e88f6034a4a1e6706329e424058eba2e6f8347546407dabcf0"
	I1212 00:35:48.682424  317449 cri.go:89] found id: "174c7ee6c2a1a9165b183524a2e37c57f1d18763ab56511debb798c7745ad211"
	I1212 00:35:48.682437  317449 cri.go:89] found id: "d7a4fba82f0eabc43ef5bf98075ffc80893b0fb5b78287f6ce879a700f528fe8"
	I1212 00:35:48.682442  317449 cri.go:89] found id: ""
	I1212 00:35:48.682512  317449 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:35:48.694079  317449 retry.go:31] will retry after 368.32059ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:48Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:35:49.062681  317449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:49.077030  317449 pause.go:52] kubelet running: false
	I1212 00:35:49.077110  317449 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:35:49.221286  317449 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:35:49.221705  317449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:35:49.301942  317449 cri.go:89] found id: "d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc"
	I1212 00:35:49.301969  317449 cri.go:89] found id: "8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80"
	I1212 00:35:49.301973  317449 cri.go:89] found id: "0a27b96d1bb7b73ed434ac959df9965a37373d6cec1e595449d9d42feb886a00"
	I1212 00:35:49.301976  317449 cri.go:89] found id: "9783b15121a958e88f6034a4a1e6706329e424058eba2e6f8347546407dabcf0"
	I1212 00:35:49.301979  317449 cri.go:89] found id: "174c7ee6c2a1a9165b183524a2e37c57f1d18763ab56511debb798c7745ad211"
	I1212 00:35:49.301983  317449 cri.go:89] found id: "d7a4fba82f0eabc43ef5bf98075ffc80893b0fb5b78287f6ce879a700f528fe8"
	I1212 00:35:49.301986  317449 cri.go:89] found id: ""
	I1212 00:35:49.302030  317449 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:35:49.316417  317449 out.go:203] 
	W1212 00:35:49.317549  317449 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:35:49.317566  317449 out.go:285] * 
	* 
	W1212 00:35:49.321464  317449 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:35:49.322723  317449 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-821472 --alsologtostderr -v=1 failed: exit status 80
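The trace above shows why pause exits 80: kubelet is stopped and the kube-system containers are still listed by crictl, but "sudo runc list -f json" keeps failing with "open /run/runc: no such file or directory". The retry helper backs off three times ("will retry after 166.69358ms", "473.134047ms", "368.32059ms") and the fourth failure is surfaced as GUEST_PAUSE. A minimal sketch of that retry-with-backoff shape follows; the helper name, jittered delays, and attempt count are illustrative assumptions, not minikube's actual implementation.

	// Illustrative retry-with-backoff sketch, mirroring the "will retry after ..."
	// pattern in the trace above. Delays and attempt count are assumptions.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// runWithRetry runs the command up to `attempts` times, sleeping a jittered
	// delay between failures, and returns the last error if every attempt fails.
	func runWithRetry(attempts int, name string, args ...string) ([]byte, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command(name, args...).CombinedOutput()
			if err == nil {
				return out, nil
			}
			lastErr = err
			delay := time.Duration(100+rand.Intn(400)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return nil, lastErr
	}

	func main() {
		// Same command the pause path runs on the node (executed locally here).
		out, err := runWithRetry(4, "sudo", "runc", "list", "-f", "json")
		if err != nil {
			fmt.Println("giving up:", err)
			return
		}
		fmt.Println(string(out))
	}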
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-821472
helpers_test.go:244: (dbg) docker inspect newest-cni-821472:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c",
	        "Created": "2025-12-12T00:35:06.315057819Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313884,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:35:36.843172239Z",
	            "FinishedAt": "2025-12-12T00:35:35.986255398Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/hosts",
	        "LogPath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c-json.log",
	        "Name": "/newest-cni-821472",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-821472:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-821472",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c",
	                "LowerDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-821472",
	                "Source": "/var/lib/docker/volumes/newest-cni-821472/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-821472",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-821472",
	                "name.minikube.sigs.k8s.io": "newest-cni-821472",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "df071352e7984831ca61601948ce2e11f769b2c7271b827c8d47852f147a1395",
	            "SandboxKey": "/var/run/docker/netns/df071352e798",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-821472": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "575ab5e56d9527c2eb921586a6877a45ff36317f0e61f2f54d90ea7972b9e6b3",
	                    "EndpointID": "f0655008ef10c278bb8341fac0afe26cf7b5df33c1f374b05fa240197204c0e3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "da:f1:4b:5b:8f:ab",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-821472",
	                        "a4f2642ba7b2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
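The inspect output above is also where the host-mapped SSH port comes from: earlier in the trace the pause command runs docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472, which for this container resolves to 33103 (see the "22/tcp" entry in the Ports block). A small sketch of the same lookup from Go; wrapping the CLI call in os/exec is an assumption for the example, not how the harness necessarily does it.

	// Illustrative sketch: read the host port mapped to the container's 22/tcp
	// using the same Go-template query that appears in the trace above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		// Indexes .NetworkSettings.Ports["22/tcp"][0].HostPort in the inspect output.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("newest-cni-821472")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh port:", port) // 33103 in the run above
	}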
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-821472 -n newest-cni-821472
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-821472 -n newest-cni-821472: exit status 2 (323.696527ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-821472 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ image   │ old-k8s-version-743506 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p old-k8s-version-743506 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ image   │ no-preload-675290 image list --format=json                                                                                                                                                                                                           │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p no-preload-675290 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ delete  │ -p disable-driver-mounts-039387                                                                                                                                                                                                                      │ disable-driver-mounts-039387 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p default-k8s-diff-port-079970 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image   │ embed-certs-858659 image list --format=json                                                                                                                                                                                                          │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ pause   │ -p embed-certs-858659 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-821472 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ delete  │ -p embed-certs-858659                                                                                                                                                                                                                                │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ stop    │ -p newest-cni-821472 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p embed-certs-858659                                                                                                                                                                                                                                │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p auto-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-129742                  │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-821472 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-079970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ image   │ newest-cni-821472 image list --format=json                                                                                                                                                                                                           │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ pause   │ -p newest-cni-821472 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-079970 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:36.615003  313657 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:36.615132  313657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:36.615144  313657 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:36.615150  313657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:36.615357  313657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:35:36.615821  313657 out.go:368] Setting JSON to false
	I1212 00:35:36.616985  313657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4683,"bootTime":1765495054,"procs":426,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:35:36.617037  313657 start.go:143] virtualization: kvm guest
	I1212 00:35:36.619333  313657 out.go:179] * [newest-cni-821472] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:35:36.620570  313657 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:36.620576  313657 notify.go:221] Checking for updates...
	I1212 00:35:36.623071  313657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:36.624619  313657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:36.625809  313657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:35:36.627128  313657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:35:36.628277  313657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:36.629940  313657 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:36.630509  313657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:36.653789  313657 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:35:36.653878  313657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:36.711943  313657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 00:35:36.702406624 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:36.712042  313657 docker.go:319] overlay module found
	I1212 00:35:36.713883  313657 out.go:179] * Using the docker driver based on existing profile
	I1212 00:35:36.715171  313657 start.go:309] selected driver: docker
	I1212 00:35:36.715189  313657 start.go:927] validating driver "docker" against &{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:36.715263  313657 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:36.715863  313657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:36.770538  313657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 00:35:36.761701455 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:36.770807  313657 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 00:35:36.770828  313657 cni.go:84] Creating CNI manager for ""
	I1212 00:35:36.770886  313657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:36.770927  313657 start.go:353] cluster config:
	{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:36.772612  313657 out.go:179] * Starting "newest-cni-821472" primary control-plane node in "newest-cni-821472" cluster
	I1212 00:35:36.773801  313657 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:35:36.774921  313657 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:35:36.775828  313657 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:36.775859  313657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:35:36.775873  313657 cache.go:65] Caching tarball of preloaded images
	I1212 00:35:36.775931  313657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:35:36.775948  313657 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:35:36.775956  313657 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:35:36.776059  313657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:36.796954  313657 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:35:36.796975  313657 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:35:36.796994  313657 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:35:36.797025  313657 start.go:360] acquireMachinesLock for newest-cni-821472: {Name:mk1920b4afd40f764aad092389429d0db04875a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:35:36.797095  313657 start.go:364] duration metric: took 42.709µs to acquireMachinesLock for "newest-cni-821472"
	I1212 00:35:36.797116  313657 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:35:36.797133  313657 fix.go:54] fixHost starting: 
	I1212 00:35:36.797415  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:36.815009  313657 fix.go:112] recreateIfNeeded on newest-cni-821472: state=Stopped err=<nil>
	W1212 00:35:36.815030  313657 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:35:35.782567  300250 pod_ready.go:83] waiting for pod "kube-proxy-dp8fl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.182605  300250 pod_ready.go:94] pod "kube-proxy-dp8fl" is "Ready"
	I1212 00:35:36.182630  300250 pod_ready.go:86] duration metric: took 400.039495ms for pod "kube-proxy-dp8fl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.383406  300250 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.782412  300250 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-079970" is "Ready"
	I1212 00:35:36.782432  300250 pod_ready.go:86] duration metric: took 399.007194ms for pod "kube-scheduler-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.782442  300250 pod_ready.go:40] duration metric: took 1.604170425s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:35:36.826188  300250 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:35:36.828786  300250 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-079970" cluster and "default" namespace by default
	I1212 00:35:32.667738  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:35:32.667810  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:32.667861  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:32.694152  263844 cri.go:89] found id: "e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29"
	I1212 00:35:32.694178  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:32.694185  263844 cri.go:89] found id: ""
	I1212 00:35:32.694195  263844 logs.go:282] 2 containers: [e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:32.694250  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.698030  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.701602  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:32.701662  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:32.726279  263844 cri.go:89] found id: ""
	I1212 00:35:32.726303  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.726313  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:32.726321  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:32.726370  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:32.752014  263844 cri.go:89] found id: ""
	I1212 00:35:32.752038  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.752046  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:32.752052  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:32.752101  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:32.778627  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:32.778649  263844 cri.go:89] found id: ""
	I1212 00:35:32.778659  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:32.778720  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.782297  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:32.782345  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:32.807291  263844 cri.go:89] found id: ""
	I1212 00:35:32.807319  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.807329  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:32.807336  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:32.807379  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:32.831788  263844 cri.go:89] found id: "053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95"
	I1212 00:35:32.831807  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:32.831813  263844 cri.go:89] found id: ""
	I1212 00:35:32.831822  263844 logs.go:282] 2 containers: [053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:32.831874  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.835418  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.838729  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:32.838789  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:32.862931  263844 cri.go:89] found id: ""
	I1212 00:35:32.862951  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.862958  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:32.862963  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:32.863004  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:32.887935  263844 cri.go:89] found id: ""
	I1212 00:35:32.887956  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.887966  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:32.887983  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:32.887995  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:32.975983  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:32.976009  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:32.989497  263844 logs.go:123] Gathering logs for kube-controller-manager [053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95] ...
	I1212 00:35:32.989524  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95"
	I1212 00:35:33.013710  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:33.013734  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:33.038040  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:33.038062  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:33.065603  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:33.065624  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 00:35:34.587325  311720 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:35:34.610803  311720 cli_runner.go:164] Run: docker container inspect auto-129742 --format={{.State.Status}}
	I1212 00:35:34.631514  311720 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:35:34.631535  311720 kic_runner.go:114] Args: [docker exec --privileged auto-129742 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:35:34.678876  311720 cli_runner.go:164] Run: docker container inspect auto-129742 --format={{.State.Status}}
	I1212 00:35:34.698678  311720 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:34.698797  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:34.720756  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:34.721018  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:34.721035  311720 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:34.721699  311720 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40998->127.0.0.1:33098: read: connection reset by peer
	I1212 00:35:37.853402  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-129742
	
	I1212 00:35:37.853432  311720 ubuntu.go:182] provisioning hostname "auto-129742"
	I1212 00:35:37.853555  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:37.876199  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:37.876566  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:37.876594  311720 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-129742 && echo "auto-129742" | sudo tee /etc/hostname
	I1212 00:35:38.026989  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-129742
	
	I1212 00:35:38.027083  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.046329  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:38.046564  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:38.046581  311720 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-129742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-129742/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-129742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:38.176692  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:38.176731  311720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:38.176755  311720 ubuntu.go:190] setting up certificates
	I1212 00:35:38.176765  311720 provision.go:84] configureAuth start
	I1212 00:35:38.176827  311720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129742
	I1212 00:35:38.194501  311720 provision.go:143] copyHostCerts
	I1212 00:35:38.194561  311720 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:38.194575  311720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:38.194653  311720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:38.194789  311720 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:38.194801  311720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:38.194844  311720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:38.194942  311720 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:38.194954  311720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:38.194988  311720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:38.195075  311720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.auto-129742 san=[127.0.0.1 192.168.94.2 auto-129742 localhost minikube]
	I1212 00:35:38.258148  311720 provision.go:177] copyRemoteCerts
	I1212 00:35:38.258201  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:38.258239  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.276391  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:38.371020  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:35:38.389078  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:38.405517  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1212 00:35:38.421620  311720 provision.go:87] duration metric: took 244.837784ms to configureAuth
	I1212 00:35:38.421642  311720 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:38.421782  311720 config.go:182] Loaded profile config "auto-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:38.421872  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.439397  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:38.439621  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:38.439637  311720 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:38.705109  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:38.705134  311720 machine.go:97] duration metric: took 4.006432612s to provisionDockerMachine
	I1212 00:35:38.705148  311720 client.go:176] duration metric: took 8.918452873s to LocalClient.Create
	I1212 00:35:38.705164  311720 start.go:167] duration metric: took 8.91850497s to libmachine.API.Create "auto-129742"
	I1212 00:35:38.705174  311720 start.go:293] postStartSetup for "auto-129742" (driver="docker")
	I1212 00:35:38.705195  311720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:38.705258  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:38.705308  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.722687  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:38.818543  311720 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:38.821829  311720 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:38.821860  311720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:38.821871  311720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:38.821922  311720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:38.821994  311720 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:38.822078  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:38.829079  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:38.847819  311720 start.go:296] duration metric: took 142.630457ms for postStartSetup
	I1212 00:35:38.848136  311720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129742
	I1212 00:35:38.866156  311720 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/config.json ...
	I1212 00:35:38.866447  311720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:38.866537  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.884046  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:38.974764  311720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:38.978927  311720 start.go:128] duration metric: took 9.194206502s to createHost
	I1212 00:35:38.978948  311720 start.go:83] releasing machines lock for "auto-129742", held for 9.194343995s
	I1212 00:35:38.979007  311720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129742
	I1212 00:35:38.997762  311720 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:38.997818  311720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:38.997831  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.997895  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:39.016623  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:39.017350  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:39.161811  311720 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:39.167759  311720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:39.200505  311720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:39.204645  311720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:39.204707  311720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:39.228273  311720 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:35:39.228295  311720 start.go:496] detecting cgroup driver to use...
	I1212 00:35:39.228326  311720 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:39.228363  311720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:39.243212  311720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:39.254237  311720 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:39.254286  311720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:39.270799  311720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:39.286853  311720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:39.366180  311720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:39.449288  311720 docker.go:234] disabling docker service ...
	I1212 00:35:39.449359  311720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:39.466333  311720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:39.478905  311720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:39.560467  311720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:39.640214  311720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:35:39.651768  311720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:39.664769  311720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:39.664823  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.674135  311720 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:39.674187  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.682174  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.689970  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.698157  311720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:39.705528  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.713225  311720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.725377  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.733216  311720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:39.740029  311720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:39.746779  311720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:39.824567  311720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:35:39.964512  311720 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:39.964586  311720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:39.968445  311720 start.go:564] Will wait 60s for crictl version
	I1212 00:35:39.968513  311720 ssh_runner.go:195] Run: which crictl
	I1212 00:35:39.971962  311720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:39.995943  311720 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:35:39.996015  311720 ssh_runner.go:195] Run: crio --version
	I1212 00:35:40.023093  311720 ssh_runner.go:195] Run: crio --version
	I1212 00:35:40.050712  311720 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:35:40.051747  311720 cli_runner.go:164] Run: docker network inspect auto-129742 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:40.068586  311720 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:40.072316  311720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:40.082284  311720 kubeadm.go:884] updating cluster {Name:auto-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:40.082406  311720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:35:40.082448  311720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:40.112765  311720 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:40.112784  311720 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:40.112821  311720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:40.137112  311720 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:40.137133  311720 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:40.137142  311720 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1212 00:35:40.137235  311720 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-129742 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:35:40.137334  311720 ssh_runner.go:195] Run: crio config
	I1212 00:35:40.182977  311720 cni.go:84] Creating CNI manager for ""
	I1212 00:35:40.183007  311720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:40.183035  311720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:35:40.183068  311720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-129742 NodeName:auto-129742 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:40.183243  311720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-129742"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:35:40.183329  311720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:35:40.191388  311720 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:40.191460  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:40.199299  311720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1212 00:35:40.211054  311720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:35:40.225785  311720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1212 00:35:40.237527  311720 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:40.240865  311720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:40.250170  311720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:40.337022  311720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:40.355615  311720 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742 for IP: 192.168.94.2
	I1212 00:35:40.355638  311720 certs.go:195] generating shared ca certs ...
	I1212 00:35:40.355661  311720 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.355816  311720 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:40.355874  311720 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:40.355887  311720 certs.go:257] generating profile certs ...
	I1212 00:35:40.355951  311720 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.key
	I1212 00:35:40.355973  311720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.crt with IP's: []
	I1212 00:35:40.592669  311720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.crt ...
	I1212 00:35:40.592699  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.crt: {Name:mk76e1dd7172803fbf1cbffa40c75cb48c0a838a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.592881  311720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.key ...
	I1212 00:35:40.592899  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.key: {Name:mk1935c540fad051cff06def660ab58dd355b134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.593007  311720 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351
	I1212 00:35:40.593028  311720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1212 00:35:40.639283  311720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351 ...
	I1212 00:35:40.639305  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351: {Name:mk9861694cb000aec3a0bd5942993e1c1f27a76b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.639440  311720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351 ...
	I1212 00:35:40.639453  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351: {Name:mkcce82003e0c847ecd0e8e701673ac85b4767d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.639537  311720 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt
	I1212 00:35:40.639609  311720 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key
	I1212 00:35:40.639661  311720 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key
	I1212 00:35:40.639675  311720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt with IP's: []
	I1212 00:35:40.754363  311720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt ...
	I1212 00:35:40.754386  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt: {Name:mkf4fd1cbb46bff32b7eda0419dad9a557f8a6d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.754562  311720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key ...
	I1212 00:35:40.754578  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key: {Name:mke823010157e4fd78cfa585a52980304b928622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.754789  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:40.754836  311720 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:40.754851  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:40.754887  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:40.754924  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:40.754959  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:40.755014  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:40.755584  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:40.773207  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:40.790251  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:40.807658  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:40.823962  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1212 00:35:40.840940  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:40.857607  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:40.874553  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:35:40.891098  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:40.909544  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:40.925460  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:40.941738  311720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:35:40.953181  311720 ssh_runner.go:195] Run: openssl version
	I1212 00:35:40.958781  311720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:40.965400  311720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:40.972141  311720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:40.975535  311720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:40.975586  311720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:41.012011  311720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:41.019203  311720 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:35:41.026483  311720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.033388  311720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:41.040112  311720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.043591  311720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.043637  311720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.082194  311720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:41.090166  311720 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:35:41.097782  311720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.106091  311720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:41.113295  311720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.117095  311720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.117155  311720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.153797  311720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:41.161240  311720 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:41.168521  311720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:41.171803  311720 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:35:41.171846  311720 kubeadm.go:401] StartCluster: {Name:auto-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:41.171906  311720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:41.171939  311720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:41.197104  311720 cri.go:89] found id: ""
	I1212 00:35:41.197153  311720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:41.204439  311720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:35:41.211497  311720 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:35:41.211545  311720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:35:41.218528  311720 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:35:41.218544  311720 kubeadm.go:158] found existing configuration files:
	
	I1212 00:35:41.218580  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:35:41.225466  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:35:41.225526  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:35:41.232209  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:35:41.239422  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:35:41.239471  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:35:41.246617  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:35:41.253661  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:35:41.253706  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:35:41.260600  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:35:41.268103  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:35:41.268156  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
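Note: the four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. Gathered into one illustrative loop (endpoint and file names copied from the log):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # keep the file only if it already points at the expected control-plane endpoint
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done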
	I1212 00:35:41.275464  311720 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:35:41.314774  311720 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:35:41.314819  311720 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:35:41.333053  311720 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:35:41.333128  311720 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:35:41.333187  311720 kubeadm.go:319] OS: Linux
	I1212 00:35:41.333281  311720 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:35:41.333353  311720 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:35:41.333411  311720 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:35:41.333538  311720 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:35:41.333609  311720 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:35:41.333670  311720 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:35:41.333731  311720 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:35:41.333787  311720 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:35:41.391557  311720 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:35:41.391703  311720 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:35:41.391828  311720 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:35:41.399245  311720 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
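Note: the preflight output a few lines above suggests pre-pulling the control-plane images. Against this node that would look roughly like the following (binaries path and config file are the ones used by the kubeadm init invocation in this run; running it separately is only a suggestion, not something the test does):

    # pre-pull control-plane images ahead of kubeadm init, per the preflight hint
    sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
        kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml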
	I1212 00:35:36.816773  313657 out.go:252] * Restarting existing docker container for "newest-cni-821472" ...
	I1212 00:35:36.816850  313657 cli_runner.go:164] Run: docker start newest-cni-821472
	I1212 00:35:37.080371  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:37.099121  313657 kic.go:430] container "newest-cni-821472" state is running.
	I1212 00:35:37.099565  313657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:37.118114  313657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:37.118375  313657 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:37.118451  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:37.136773  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:37.137029  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:37.137045  313657 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:37.137774  313657 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53888->127.0.0.1:33103: read: connection reset by peer
	I1212 00:35:40.271768  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:40.271794  313657 ubuntu.go:182] provisioning hostname "newest-cni-821472"
	I1212 00:35:40.271842  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.294413  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:40.294663  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:40.294683  313657 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-821472 && echo "newest-cni-821472" | sudo tee /etc/hostname
	I1212 00:35:40.435333  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:40.435409  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.455014  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:40.455218  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:40.455234  313657 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-821472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-821472/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-821472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:40.586503  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:40.586531  313657 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:40.586569  313657 ubuntu.go:190] setting up certificates
	I1212 00:35:40.586583  313657 provision.go:84] configureAuth start
	I1212 00:35:40.586659  313657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:40.605310  313657 provision.go:143] copyHostCerts
	I1212 00:35:40.605389  313657 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:40.605409  313657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:40.605489  313657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:40.605616  313657 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:40.605630  313657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:40.605684  313657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:40.605770  313657 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:40.605779  313657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:40.605818  313657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:40.605935  313657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.newest-cni-821472 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-821472]
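Note: the generated Docker machine server certificate must carry every address the host is reached by, which is what the san=[...] list above records. One way to confirm the SANs on the generated file, assuming openssl is available on the control host (path copied from the log):

    # print the Subject Alternative Name extension of the freshly generated server cert
    openssl x509 -noout -text -in /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem |
        grep -A1 'Subject Alternative Name'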
	I1212 00:35:40.662347  313657 provision.go:177] copyRemoteCerts
	I1212 00:35:40.662411  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:40.662468  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.681297  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:40.777054  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:40.795024  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:35:40.811612  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:35:40.827828  313657 provision.go:87] duration metric: took 241.225014ms to configureAuth
	I1212 00:35:40.827853  313657 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:40.828024  313657 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:40.828117  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.846619  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:40.846888  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:40.846924  313657 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:41.128685  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:41.128707  313657 machine.go:97] duration metric: took 4.01031546s to provisionDockerMachine
	I1212 00:35:41.128720  313657 start.go:293] postStartSetup for "newest-cni-821472" (driver="docker")
	I1212 00:35:41.128735  313657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:41.128800  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:41.128844  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.146619  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.241954  313657 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:41.245283  313657 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:41.245314  313657 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:41.245327  313657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:41.245380  313657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:41.245456  313657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:41.245579  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:41.252803  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:41.270439  313657 start.go:296] duration metric: took 141.702762ms for postStartSetup
	I1212 00:35:41.270533  313657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:41.270589  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.291253  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.385395  313657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:41.389632  313657 fix.go:56] duration metric: took 4.592500888s for fixHost
	I1212 00:35:41.389658  313657 start.go:83] releasing machines lock for "newest-cni-821472", held for 4.592550322s
	I1212 00:35:41.389719  313657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:41.409111  313657 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:41.409194  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.409202  313657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:41.409264  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.426685  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.427390  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.573962  313657 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:41.580109  313657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:41.612958  313657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:41.617419  313657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:41.617521  313657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:41.625155  313657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:35:41.625174  313657 start.go:496] detecting cgroup driver to use...
	I1212 00:35:41.625206  313657 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:41.625270  313657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:41.638804  313657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:41.650436  313657 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:41.650494  313657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:41.663651  313657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:41.674712  313657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:41.753037  313657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:41.833338  313657 docker.go:234] disabling docker service ...
	I1212 00:35:41.833390  313657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:41.847161  313657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:41.858716  313657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:41.941176  313657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:42.022532  313657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:35:42.034350  313657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:42.047735  313657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:42.047788  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.055989  313657 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:42.056046  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.064168  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.072270  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.080522  313657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:42.087789  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.095782  313657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.103399  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.111579  313657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:42.118175  313657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:42.124818  313657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:42.202070  313657 ssh_runner.go:195] Run: sudo systemctl restart crio
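Note: the sed commands above rewrite the cri-o drop-in (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before the daemon-reload and restart. An illustrative way to confirm the drop-in took effect after the restart (file path and keys taken from the log):

    # show the rewritten keys and check the service came back up
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio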
	I1212 00:35:42.349497  313657 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:42.349576  313657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:42.353384  313657 start.go:564] Will wait 60s for crictl version
	I1212 00:35:42.353437  313657 ssh_runner.go:195] Run: which crictl
	I1212 00:35:42.356828  313657 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:42.380925  313657 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:35:42.380991  313657 ssh_runner.go:195] Run: crio --version
	I1212 00:35:42.408051  313657 ssh_runner.go:195] Run: crio --version
	I1212 00:35:42.437900  313657 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:35:42.439134  313657 cli_runner.go:164] Run: docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:42.458296  313657 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:42.462111  313657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:42.473493  313657 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 00:35:42.474492  313657 kubeadm.go:884] updating cluster {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:42.474641  313657 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:42.474706  313657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:42.503840  313657 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:42.503857  313657 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:42.503898  313657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:42.527719  313657 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:42.527738  313657 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:42.527746  313657 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 00:35:42.527850  313657 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-821472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:35:42.527930  313657 ssh_runner.go:195] Run: crio config
	I1212 00:35:42.586235  313657 cni.go:84] Creating CNI manager for ""
	I1212 00:35:42.586263  313657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:42.586281  313657 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 00:35:42.586309  313657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-821472 NodeName:newest-cni-821472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:42.586491  313657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-821472"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:35:42.586563  313657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:35:42.595617  313657 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:42.595679  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:42.603149  313657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:35:42.615085  313657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:35:42.626755  313657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
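Note: the 2218-byte file written here is the kubeadm config rendered above. A config like this can be sanity-checked without mutating the node via kubeadm's dry-run mode; this invocation is illustrative (the run itself does not do it), with the binaries path and file name taken from the log:

    # dry-run the rendered config instead of applying it
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run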
	I1212 00:35:42.638262  313657 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:42.641589  313657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:42.650971  313657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:42.728755  313657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:42.751540  313657 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472 for IP: 192.168.76.2
	I1212 00:35:42.751556  313657 certs.go:195] generating shared ca certs ...
	I1212 00:35:42.751573  313657 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:42.751733  313657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:42.751802  313657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:42.751819  313657 certs.go:257] generating profile certs ...
	I1212 00:35:42.751927  313657 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key
	I1212 00:35:42.751999  313657 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0
	I1212 00:35:42.752048  313657 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key
	I1212 00:35:42.752192  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:42.752235  313657 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:42.752248  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:42.752283  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:42.752318  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:42.752360  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:42.752415  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:42.753053  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:42.770854  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:42.790078  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:42.807620  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:42.828730  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:35:42.849521  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:42.865729  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:42.881653  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:35:42.898013  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:42.914298  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:42.930406  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:42.947383  313657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:35:42.959592  313657 ssh_runner.go:195] Run: openssl version
	I1212 00:35:42.965251  313657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:42.971937  313657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:42.978671  313657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:42.982018  313657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:42.982066  313657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:43.016314  313657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:43.023303  313657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.030374  313657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:43.037133  313657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.040548  313657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.040580  313657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.074151  313657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:43.081008  313657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.087822  313657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:43.094643  313657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.098274  313657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.098321  313657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.133554  313657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
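Note: the openssl -hash / ln -fs pairs above are how the CA certificates land under /etc/ssl/certs: OpenSSL's subject hash becomes the symlink name (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). The pattern written out once, using one of the cert paths from the log:

    cert=/usr/share/ca-certificates/145032.pem
    # the subject hash plus ".0" is the conventional link name under /etc/ssl/certs
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"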
	I1212 00:35:43.141912  313657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:43.146261  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:35:43.186934  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:35:43.225313  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:35:43.263428  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:35:43.309905  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:35:43.363587  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
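Note: each openssl call above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours). An illustrative loop over some of the same certs (the per-cert message is an addition, not minikube output):

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
        # non-zero exit means the cert expires within the checkend window
        sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
            || echo "$c.crt expires within 24h"
    done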
	I1212 00:35:43.422765  313657 kubeadm.go:401] StartCluster: {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:43.422885  313657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:43.422972  313657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:43.457007  313657 cri.go:89] found id: "0a27b96d1bb7b73ed434ac959df9965a37373d6cec1e595449d9d42feb886a00"
	I1212 00:35:43.457043  313657 cri.go:89] found id: "9783b15121a958e88f6034a4a1e6706329e424058eba2e6f8347546407dabcf0"
	I1212 00:35:43.457056  313657 cri.go:89] found id: "174c7ee6c2a1a9165b183524a2e37c57f1d18763ab56511debb798c7745ad211"
	I1212 00:35:43.457062  313657 cri.go:89] found id: "d7a4fba82f0eabc43ef5bf98075ffc80893b0fb5b78287f6ce879a700f528fe8"
	I1212 00:35:43.457066  313657 cri.go:89] found id: ""
	I1212 00:35:43.457148  313657 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 00:35:43.470031  313657 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:43Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:35:43.470109  313657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:43.478723  313657 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:35:43.478750  313657 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:35:43.478793  313657 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:35:43.486064  313657 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:35:43.486980  313657 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-821472" does not appear in /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:43.487360  313657 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-10975/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-821472" cluster setting kubeconfig missing "newest-cni-821472" context setting]
	I1212 00:35:43.488053  313657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:43.489882  313657 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:35:43.497249  313657 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 00:35:43.497276  313657 kubeadm.go:602] duration metric: took 18.518865ms to restartPrimaryControlPlane
	I1212 00:35:43.497296  313657 kubeadm.go:403] duration metric: took 74.544874ms to StartCluster
	I1212 00:35:43.497311  313657 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:43.497364  313657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:43.498716  313657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:43.498945  313657 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:35:43.499041  313657 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:35:43.499128  313657 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:43.499138  313657 addons.go:70] Setting dashboard=true in profile "newest-cni-821472"
	I1212 00:35:43.499151  313657 addons.go:239] Setting addon dashboard=true in "newest-cni-821472"
	W1212 00:35:43.499160  313657 addons.go:248] addon dashboard should already be in state true
	I1212 00:35:43.499168  313657 addons.go:70] Setting default-storageclass=true in profile "newest-cni-821472"
	I1212 00:35:43.499128  313657 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-821472"
	I1212 00:35:43.499189  313657 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:43.499192  313657 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-821472"
	I1212 00:35:43.499191  313657 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-821472"
	W1212 00:35:43.499199  313657 addons.go:248] addon storage-provisioner should already be in state true
	I1212 00:35:43.499216  313657 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:43.499523  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.499674  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.499807  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.502446  313657 out.go:179] * Verifying Kubernetes components...
	I1212 00:35:43.503509  313657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:43.528175  313657 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 00:35:43.528255  313657 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:35:43.529215  313657 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:43.529234  313657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:35:43.529309  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:43.530562  313657 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 00:35:41.403306  311720 out.go:252]   - Generating certificates and keys ...
	I1212 00:35:41.403420  311720 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:35:41.403546  311720 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:35:41.610366  311720 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:35:41.904549  311720 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:35:42.236869  311720 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:35:42.432939  311720 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:35:42.670711  311720 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:35:42.670850  311720 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-129742 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:35:42.733388  311720 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:35:42.733562  311720 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-129742 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:35:42.890557  311720 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:35:43.189890  311720 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:35:43.432084  311720 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:35:43.432181  311720 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:35:44.496937  311720 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:35:43.532520  313657 addons.go:239] Setting addon default-storageclass=true in "newest-cni-821472"
	W1212 00:35:43.532542  313657 addons.go:248] addon default-storageclass should already be in state true
	I1212 00:35:43.532571  313657 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:43.533019  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.533188  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 00:35:43.533204  313657 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 00:35:43.533268  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:43.564658  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:43.566726  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:43.571698  313657 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:43.571744  313657 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:35:43.571811  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:43.596063  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:43.673895  313657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:43.689745  313657 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:35:43.689816  313657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:35:43.691914  313657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:43.694831  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 00:35:43.694849  313657 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 00:35:43.704985  313657 api_server.go:72] duration metric: took 206.002727ms to wait for apiserver process to appear ...
	I1212 00:35:43.705014  313657 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:35:43.705040  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:43.711963  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 00:35:43.711982  313657 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 00:35:43.713874  313657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:43.728659  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 00:35:43.728713  313657 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 00:35:43.745923  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 00:35:43.745946  313657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 00:35:43.764806  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 00:35:43.764899  313657 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 00:35:43.779853  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 00:35:43.779879  313657 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 00:35:43.794230  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 00:35:43.794256  313657 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 00:35:43.807520  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 00:35:43.807543  313657 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 00:35:43.823054  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:35:43.823074  313657 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 00:35:43.836827  313657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:35:44.593775  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:35:44.593804  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:35:44.593820  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:44.599187  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:35:44.599214  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:35:44.705950  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:44.710897  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:35:44.710933  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:35:45.196541  313657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.50459537s)
	I1212 00:35:45.196600  313657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.482698107s)
	I1212 00:35:45.196696  313657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.359839372s)
	I1212 00:35:45.198335  313657 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-821472 addons enable metrics-server
	
	I1212 00:35:45.205281  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:45.210437  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:35:45.210462  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:35:45.213577  313657 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1212 00:35:44.808888  311720 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:35:45.083910  311720 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:35:45.340764  311720 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:35:45.748663  311720 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:35:45.749566  311720 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:35:45.754425  311720 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:35:45.214712  313657 addons.go:530] duration metric: took 1.71567698s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 00:35:45.705550  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:45.710965  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:35:45.710992  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:35:46.206091  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:46.210732  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 00:35:46.212014  313657 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 00:35:46.212039  313657 api_server.go:131] duration metric: took 2.507015976s to wait for apiserver health ...
	I1212 00:35:46.212049  313657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:35:46.215707  313657 system_pods.go:59] 8 kube-system pods found
	I1212 00:35:46.215736  313657 system_pods.go:61] "coredns-7d764666f9-jh7k7" [47b3a0d4-8cf1-493d-8476-854bf16da9c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 00:35:46.215747  313657 system_pods.go:61] "etcd-newest-cni-821472" [873a9831-a5b5-4c30-ab0d-03b2d4f01bc9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:35:46.215755  313657 system_pods.go:61] "kindnet-j79t9" [d76b2dd5-9a77-4340-8bbf-9c37dbb875ed] Running
	I1212 00:35:46.215764  313657 system_pods.go:61] "kube-apiserver-newest-cni-821472" [f133af68-91ae-4346-a167-9b8a88347f18] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:35:46.215775  313657 system_pods.go:61] "kube-controller-manager-newest-cni-821472" [549c410e-aef5-4f29-b928-488385df0998] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:35:46.215781  313657 system_pods.go:61] "kube-proxy-9kt8x" [5f73abae-7ab2-4110-a5e8-3623cf25bab2] Running
	I1212 00:35:46.215791  313657 system_pods.go:61] "kube-scheduler-newest-cni-821472" [4daba7f7-0db4-44d6-b143-0d9dba4b5048] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:35:46.215802  313657 system_pods.go:61] "storage-provisioner" [cd0e3704-d2bd-42bc-b3fb-5da6006b6e6d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 00:35:46.215809  313657 system_pods.go:74] duration metric: took 3.752795ms to wait for pod list to return data ...
	I1212 00:35:46.215821  313657 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:35:46.218587  313657 default_sa.go:45] found service account: "default"
	I1212 00:35:46.218607  313657 default_sa.go:55] duration metric: took 2.780354ms for default service account to be created ...
	I1212 00:35:46.218620  313657 kubeadm.go:587] duration metric: took 2.719646377s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 00:35:46.218647  313657 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:35:46.221555  313657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:35:46.221582  313657 node_conditions.go:123] node cpu capacity is 8
	I1212 00:35:46.221599  313657 node_conditions.go:105] duration metric: took 2.945508ms to run NodePressure ...
	I1212 00:35:46.221613  313657 start.go:242] waiting for startup goroutines ...
	I1212 00:35:46.221624  313657 start.go:247] waiting for cluster config update ...
	I1212 00:35:46.221638  313657 start.go:256] writing updated cluster config ...
	I1212 00:35:46.221922  313657 ssh_runner.go:195] Run: rm -f paused
	I1212 00:35:46.280888  313657 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 00:35:46.282884  313657 out.go:179] * Done! kubectl is now configured to use "newest-cni-821472" cluster and "default" namespace by default
	I1212 00:35:43.120267  263844 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.054628138s)
	W1212 00:35:43.120298  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1212 00:35:43.120305  263844 logs.go:123] Gathering logs for kube-apiserver [e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29] ...
	I1212 00:35:43.120323  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29"
	I1212 00:35:43.151058  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:43.151080  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:43.183408  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:43.183439  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:43.209551  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:43.209574  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:45.765529  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:45.782543  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:53604->192.168.85.2:8443: read: connection reset by peer
	I1212 00:35:45.782606  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:45.782663  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:45.815980  263844 cri.go:89] found id: "e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29"
	I1212 00:35:45.816003  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:45.816008  263844 cri.go:89] found id: ""
	I1212 00:35:45.816017  263844 logs.go:282] 2 containers: [e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:45.816070  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:45.821600  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:45.826994  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:45.827062  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:45.858146  263844 cri.go:89] found id: ""
	I1212 00:35:45.858171  263844 logs.go:282] 0 containers: []
	W1212 00:35:45.858180  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:45.858187  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:45.858238  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:45.893267  263844 cri.go:89] found id: ""
	I1212 00:35:45.893292  263844 logs.go:282] 0 containers: []
	W1212 00:35:45.893303  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:45.893310  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:45.893364  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:45.925316  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:45.925335  263844 cri.go:89] found id: ""
	I1212 00:35:45.925343  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:45.925387  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:45.929750  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:45.929800  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:45.959869  263844 cri.go:89] found id: ""
	I1212 00:35:45.959899  263844 logs.go:282] 0 containers: []
	W1212 00:35:45.959912  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:45.959920  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:45.959974  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:45.992072  263844 cri.go:89] found id: "053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95"
	I1212 00:35:45.992097  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:45.992103  263844 cri.go:89] found id: ""
	I1212 00:35:45.992112  263844 logs.go:282] 2 containers: [053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:45.992179  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:45.996698  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:46.000777  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:46.000842  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:46.029239  263844 cri.go:89] found id: ""
	I1212 00:35:46.029265  263844 logs.go:282] 0 containers: []
	W1212 00:35:46.029274  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:46.029282  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:46.029339  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:46.058976  263844 cri.go:89] found id: ""
	I1212 00:35:46.058996  263844 logs.go:282] 0 containers: []
	W1212 00:35:46.059004  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:46.059017  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:46.059028  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:46.088287  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:46.088310  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:46.181165  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:46.181190  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:46.246764  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:46.246783  263844 logs.go:123] Gathering logs for kube-apiserver [e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29] ...
	I1212 00:35:46.246805  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29"
	I1212 00:35:46.284297  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:46.284328  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:46.327077  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:46.327107  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:46.403735  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:46.403765  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:46.420563  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:46.420590  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:46.452051  263844 logs.go:123] Gathering logs for kube-controller-manager [053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95] ...
	I1212 00:35:46.452085  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95"
	I1212 00:35:46.480039  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:46.480067  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:45.757580  311720 out.go:252]   - Booting up control plane ...
	I1212 00:35:45.757704  311720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:35:45.757812  311720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:35:45.757908  311720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:35:45.774609  311720 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:35:45.774752  311720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:35:45.785788  311720 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:35:45.786358  311720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:35:45.786431  311720 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:35:45.905842  311720 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:35:45.905992  311720 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:35:46.907469  311720 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00188174s
	I1212 00:35:46.911863  311720 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:35:46.911996  311720 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1212 00:35:46.912124  311720 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:35:46.912236  311720 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:35:48.605203  311720 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.693140323s
	I1212 00:35:49.019249  311720 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.107220081s
	
	
	==> CRI-O <==
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.12899612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.132438373Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6fc69b2e-0617-4ed2-be06-0508b37415cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.134569947Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.134986943Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=49fe9bfe-8c4f-48c8-807e-e984d829ad70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.135382893Z" level=info msg="Ran pod sandbox 463b6bb20138c3101447df0fbe90586d1d1fe39556c07ee2701aaf82dee81f8e with infra container: kube-system/kindnet-j79t9/POD" id=6fc69b2e-0617-4ed2-be06-0508b37415cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.136586327Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ec8b330e-0fde-417d-b9f2-c17ef2f440df name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.136979745Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.137777562Z" level=info msg="Ran pod sandbox 791a2d2f2cfe0da14e8e185791486531b569d08578f8dd6a9996b3a6a5c3a3c0 with infra container: kube-system/kube-proxy-9kt8x/POD" id=49fe9bfe-8c4f-48c8-807e-e984d829ad70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.13796508Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=dac4d810-6778-4c59-8e22-196e0a966274 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.138777042Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b8c677d3-b341-4301-8d6a-8b73a9395612 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.139163156Z" level=info msg="Creating container: kube-system/kindnet-j79t9/kindnet-cni" id=a0c7abdb-697f-424b-8688-ffe766e7c730 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.139250084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.139793232Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=892b9d75-a357-4cdb-ab3c-d8c8c3b1aa66 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.140689715Z" level=info msg="Creating container: kube-system/kube-proxy-9kt8x/kube-proxy" id=7b04894c-bf97-44a3-9395-7f0a02244652 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.140802362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.14413845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.144889514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.146883615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.147313219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.172719934Z" level=info msg="Created container 8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80: kube-system/kindnet-j79t9/kindnet-cni" id=a0c7abdb-697f-424b-8688-ffe766e7c730 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.173509079Z" level=info msg="Starting container: 8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80" id=26fd390f-0532-4dcc-8a03-6d7074e3c193 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.176079878Z" level=info msg="Started container" PID=1053 containerID=8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80 description=kube-system/kindnet-j79t9/kindnet-cni id=26fd390f-0532-4dcc-8a03-6d7074e3c193 name=/runtime.v1.RuntimeService/StartContainer sandboxID=463b6bb20138c3101447df0fbe90586d1d1fe39556c07ee2701aaf82dee81f8e
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.179514137Z" level=info msg="Created container d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc: kube-system/kube-proxy-9kt8x/kube-proxy" id=7b04894c-bf97-44a3-9395-7f0a02244652 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.180240947Z" level=info msg="Starting container: d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc" id=c9b8ac6a-fe51-48f2-9a1b-364534edfbcf name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.183669336Z" level=info msg="Started container" PID=1054 containerID=d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc description=kube-system/kube-proxy-9kt8x/kube-proxy id=c9b8ac6a-fe51-48f2-9a1b-364534edfbcf name=/runtime.v1.RuntimeService/StartContainer sandboxID=791a2d2f2cfe0da14e8e185791486531b569d08578f8dd6a9996b3a6a5c3a3c0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d8c8a20500f28       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   5 seconds ago       Running             kube-proxy                1                   791a2d2f2cfe0       kube-proxy-9kt8x                            kube-system
	8d3a33b59ee23       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   463b6bb20138c       kindnet-j79t9                               kube-system
	0a27b96d1bb7b       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   7 seconds ago       Running             kube-controller-manager   1                   cd2919eb1b73a       kube-controller-manager-newest-cni-821472   kube-system
	9783b15121a95       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   7 seconds ago       Running             etcd                      1                   cb73983681586       etcd-newest-cni-821472                      kube-system
	174c7ee6c2a1a       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   7 seconds ago       Running             kube-apiserver            1                   659df6c91238b       kube-apiserver-newest-cni-821472            kube-system
	d7a4fba82f0ea       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   80291f35ffb94       kube-scheduler-newest-cni-821472            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-821472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-821472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=newest-cni-821472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_35_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:35:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-821472
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:35:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:35:44 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:35:44 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:35:44 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 12 Dec 2025 00:35:44 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-821472
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                aee33282-724a-47cd-8807-62e94d0c0413
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-821472                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-j79t9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-821472             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-821472    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-9kt8x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-821472             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node newest-cni-821472 event: Registered Node newest-cni-821472 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-821472 event: Registered Node newest-cni-821472 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [9783b15121a958e88f6034a4a1e6706329e424058eba2e6f8347546407dabcf0] <==
	{"level":"warn","ts":"2025-12-12T00:35:43.975874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:43.984520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:43.991863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:43.998242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.004738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.010903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.017013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.022987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.029331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.046679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.054240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.061668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.068371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.078173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.085003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.092567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.099260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.106700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.112954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.135694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.138769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.145827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.156332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.163044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.202089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40288","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:35:50 up  1:18,  0 user,  load average: 4.61, 3.38, 2.10
	Linux newest-cni-821472 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80] <==
	I1212 00:35:45.320148       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:35:45.320369       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 00:35:45.410745       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:35:45.410776       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:35:45.410797       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:35:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:35:45.520780       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:35:45.520874       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:35:45.520927       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:35:45.610842       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:35:45.821161       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:35:45.821600       1 metrics.go:72] Registering metrics
	I1212 00:35:45.821720       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [174c7ee6c2a1a9165b183524a2e37c57f1d18763ab56511debb798c7745ad211] <==
	I1212 00:35:44.667299       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:44.667348       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 00:35:44.666614       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 00:35:44.666833       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 00:35:44.669366       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 00:35:44.669703       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 00:35:44.669897       1 aggregator.go:187] initial CRD sync complete...
	I1212 00:35:44.670237       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 00:35:44.670250       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:35:44.670259       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:35:44.676139       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 00:35:44.694042       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:35:44.694348       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:35:44.922921       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:35:44.969862       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:35:44.996719       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:35:45.015923       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:35:45.024661       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:35:45.063160       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.67.72"}
	I1212 00:35:45.073057       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.98.97"}
	I1212 00:35:45.571215       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 00:35:48.301616       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:35:48.351656       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:35:48.403049       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 00:35:48.504213       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0a27b96d1bb7b73ed434ac959df9965a37373d6cec1e595449d9d42feb886a00] <==
	I1212 00:35:47.823813       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824109       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824305       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824438       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824467       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824545       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1212 00:35:47.824614       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-821472"
	I1212 00:35:47.824664       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1212 00:35:47.824682       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824702       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824856       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824894       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.825247       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.825309       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.864680       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.878012       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.878095       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.880276       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.880295       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.880876       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.882720       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.917680       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.935102       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.935125       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 00:35:47.935132       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc] <==
	I1212 00:35:45.223907       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:35:45.278958       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:35:45.379100       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:45.379138       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 00:35:45.379283       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:35:45.396955       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:35:45.397013       1 server_linux.go:136] "Using iptables Proxier"
	I1212 00:35:45.401825       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:35:45.402222       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 00:35:45.402242       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:35:45.403541       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:35:45.403569       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:35:45.403619       1 config.go:200] "Starting service config controller"
	I1212 00:35:45.403622       1 config.go:309] "Starting node config controller"
	I1212 00:35:45.403634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:35:45.403626       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:35:45.403656       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:35:45.403662       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:35:45.504313       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:35:45.504347       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:35:45.504358       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:35:45.504376       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d7a4fba82f0eabc43ef5bf98075ffc80893b0fb5b78287f6ce879a700f528fe8] <==
	I1212 00:35:43.767224       1 serving.go:386] Generated self-signed cert in-memory
	W1212 00:35:44.621193       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:35:44.621228       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:35:44.621241       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:35:44.621250       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:35:44.655653       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1212 00:35:44.655702       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:35:44.658401       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:35:44.658433       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:35:44.658571       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 00:35:44.659003       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 00:35:44.759567       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.756740     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.762791     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-821472\" already exists" pod="kube-system/kube-scheduler-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.818221     673 apiserver.go:52] "Watching apiserver"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.824810     673 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.856248     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.856622     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.856949     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-821472" containerName="etcd"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.857630     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-821472" containerName="kube-controller-manager"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.865633     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-821472\" already exists" pod="kube-system/kube-scheduler-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.865797     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-821472" containerName="kube-scheduler"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.866982     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-821472\" already exists" pod="kube-system/kube-apiserver-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.867064     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-821472" containerName="kube-apiserver"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.920215     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f73abae-7ab2-4110-a5e8-3623cf25bab2-lib-modules\") pod \"kube-proxy-9kt8x\" (UID: \"5f73abae-7ab2-4110-a5e8-3623cf25bab2\") " pod="kube-system/kube-proxy-9kt8x"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.920264     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d76b2dd5-9a77-4340-8bbf-9c37dbb875ed-cni-cfg\") pod \"kindnet-j79t9\" (UID: \"d76b2dd5-9a77-4340-8bbf-9c37dbb875ed\") " pod="kube-system/kindnet-j79t9"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.920286     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d76b2dd5-9a77-4340-8bbf-9c37dbb875ed-lib-modules\") pod \"kindnet-j79t9\" (UID: \"d76b2dd5-9a77-4340-8bbf-9c37dbb875ed\") " pod="kube-system/kindnet-j79t9"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.920372     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d76b2dd5-9a77-4340-8bbf-9c37dbb875ed-xtables-lock\") pod \"kindnet-j79t9\" (UID: \"d76b2dd5-9a77-4340-8bbf-9c37dbb875ed\") " pod="kube-system/kindnet-j79t9"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.920432     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f73abae-7ab2-4110-a5e8-3623cf25bab2-xtables-lock\") pod \"kube-proxy-9kt8x\" (UID: \"5f73abae-7ab2-4110-a5e8-3623cf25bab2\") " pod="kube-system/kube-proxy-9kt8x"
	Dec 12 00:35:45 newest-cni-821472 kubelet[673]: E1212 00:35:45.862603     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-821472" containerName="kube-apiserver"
	Dec 12 00:35:45 newest-cni-821472 kubelet[673]: E1212 00:35:45.862768     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-821472" containerName="etcd"
	Dec 12 00:35:45 newest-cni-821472 kubelet[673]: E1212 00:35:45.863132     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-821472" containerName="kube-scheduler"
	Dec 12 00:35:47 newest-cni-821472 kubelet[673]: E1212 00:35:47.343973     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-821472" containerName="etcd"
	Dec 12 00:35:47 newest-cni-821472 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:35:47 newest-cni-821472 kubelet[673]: I1212 00:35:47.367048     673 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 12 00:35:47 newest-cni-821472 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:35:47 newest-cni-821472 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-821472 -n newest-cni-821472
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-821472 -n newest-cni-821472: exit status 2 (338.049063ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
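The status probe above can be approximated by hand when reproducing this post-mortem. The sketch below is not the helpers_test.go implementation; it only assumes the minikube binary path and profile name shown in the log, and it mirrors the harness's convention of treating exit status 2 as informational rather than fatal.

	// statusprobe.go - hedged sketch of the status check above (not harness code).
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Binary path, profile and node name are taken from the log lines above.
		cmd := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.APIServer}}",
			"-p", "newest-cni-821472", "-n", "newest-cni-821472")
		out, err := cmd.CombinedOutput()
		fmt.Printf("status output: %s", out) // e.g. "Running"
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 2 {
			// Exit status 2 reports a degraded component; the harness notes it "may be ok".
			fmt.Println("exit status 2 (may be ok)")
		} else if err != nil {
			fmt.Println("status failed:", err)
		}
	}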
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-821472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jh7k7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-wcppg kubernetes-dashboard-b84665fb8-68vlp
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-821472 describe pod coredns-7d764666f9-jh7k7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-wcppg kubernetes-dashboard-b84665fb8-68vlp
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-821472 describe pod coredns-7d764666f9-jh7k7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-wcppg kubernetes-dashboard-b84665fb8-68vlp: exit status 1 (60.26448ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jh7k7" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-wcppg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-68vlp" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-821472 describe pod coredns-7d764666f9-jh7k7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-wcppg kubernetes-dashboard-b84665fb8-68vlp: exit status 1
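The non-running-pod check above can be replayed manually. The following sketch (again, not the helpers_test.go code) assumes kubectl is on PATH and reuses the context name from the log. Like the harness's own invocation, the describe step omits the pods' namespaces, which is consistent with the NotFound errors in the stderr block: the listed pods live in kube-system and kubernetes-dashboard, not in the default namespace.

	// nonrunning.go - hedged sketch of the post-mortem pod check above.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// List names of pods in any namespace whose phase is not Running.
		out, err := exec.Command("kubectl", "--context", "newest-cni-821472",
			"get", "po", "-o=jsonpath={.items[*].metadata.name}",
			"-A", "--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		for _, pod := range strings.Fields(string(out)) {
			// Describing by bare name only works for pods in the current namespace;
			// NotFound here matches the behaviour captured in the log above.
			desc, _ := exec.Command("kubectl", "--context", "newest-cni-821472",
				"describe", "pod", pod).CombinedOutput()
			fmt.Printf("--- %s ---\n%s\n", pod, desc)
		}
	}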
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-821472
helpers_test.go:244: (dbg) docker inspect newest-cni-821472:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c",
	        "Created": "2025-12-12T00:35:06.315057819Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313884,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:35:36.843172239Z",
	            "FinishedAt": "2025-12-12T00:35:35.986255398Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/hosts",
	        "LogPath": "/var/lib/docker/containers/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c/a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c-json.log",
	        "Name": "/newest-cni-821472",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-821472:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-821472",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4f2642ba7b21590ccedb3d612a30ad9f162c77cfb4ac804cc164192bd4df06c",
	                "LowerDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94ca977471cdab415e18f1f9c4bacf7ebf57c40f63fdaf3b9ea3179b78961cc0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-821472",
	                "Source": "/var/lib/docker/volumes/newest-cni-821472/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-821472",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-821472",
	                "name.minikube.sigs.k8s.io": "newest-cni-821472",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "df071352e7984831ca61601948ce2e11f769b2c7271b827c8d47852f147a1395",
	            "SandboxKey": "/var/run/docker/netns/df071352e798",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-821472": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "575ab5e56d9527c2eb921586a6877a45ff36317f0e61f2f54d90ea7972b9e6b3",
	                    "EndpointID": "f0655008ef10c278bb8341fac0afe26cf7b5df33c1f374b05fa240197204c0e3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "da:f1:4b:5b:8f:ab",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-821472",
	                        "a4f2642ba7b2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
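The NetworkSettings.Ports block in the inspect output above records where Docker published the node's ports on the host (for example 8443/tcp -> 127.0.0.1:33106). A minimal sketch for pulling that mapping out of the inspect JSON, assuming only the container name and the field layout shown above:

	// ports.go - hedged sketch: read the host binding for 8443/tcp from `docker inspect`.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	type portBinding struct {
		HostIp   string
		HostPort string
	}
	
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]portBinding
		}
	}
	
	func main() {
		raw, err := exec.Command("docker", "inspect", "newest-cni-821472").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(raw, &entries); err != nil || len(entries) == 0 {
			panic("no inspect data")
		}
		for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
			// Matches 127.0.0.1:33106 in the output above.
			fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
		}
	}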
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-821472 -n newest-cni-821472
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-821472 -n newest-cni-821472: exit status 2 (311.879047ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-821472 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ image   │ old-k8s-version-743506 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p old-k8s-version-743506 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ image   │ no-preload-675290 image list --format=json                                                                                                                                                                                                           │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ pause   │ -p no-preload-675290 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │                     │
	│ delete  │ -p old-k8s-version-743506                                                                                                                                                                                                                            │ old-k8s-version-743506       │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ delete  │ -p disable-driver-mounts-039387                                                                                                                                                                                                                      │ disable-driver-mounts-039387 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:34 UTC │
	│ start   │ -p default-k8s-diff-port-079970 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:34 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p no-preload-675290                                                                                                                                                                                                                                 │ no-preload-675290            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image   │ embed-certs-858659 image list --format=json                                                                                                                                                                                                          │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ pause   │ -p embed-certs-858659 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-821472 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ delete  │ -p embed-certs-858659                                                                                                                                                                                                                                │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ stop    │ -p newest-cni-821472 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ delete  │ -p embed-certs-858659                                                                                                                                                                                                                                │ embed-certs-858659           │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p auto-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-129742                  │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-821472 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ start   │ -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-079970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ image   │ newest-cni-821472 image list --format=json                                                                                                                                                                                                           │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ pause   │ -p newest-cni-821472 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-821472            │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-079970 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:36.615003  313657 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:36.615132  313657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:36.615144  313657 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:36.615150  313657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:36.615357  313657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:35:36.615821  313657 out.go:368] Setting JSON to false
	I1212 00:35:36.616985  313657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4683,"bootTime":1765495054,"procs":426,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:35:36.617037  313657 start.go:143] virtualization: kvm guest
	I1212 00:35:36.619333  313657 out.go:179] * [newest-cni-821472] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:35:36.620570  313657 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:36.620576  313657 notify.go:221] Checking for updates...
	I1212 00:35:36.623071  313657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:36.624619  313657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:36.625809  313657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:35:36.627128  313657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:35:36.628277  313657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:36.629940  313657 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:36.630509  313657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:36.653789  313657 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:35:36.653878  313657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:36.711943  313657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 00:35:36.702406624 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:36.712042  313657 docker.go:319] overlay module found
	I1212 00:35:36.713883  313657 out.go:179] * Using the docker driver based on existing profile
	I1212 00:35:36.715171  313657 start.go:309] selected driver: docker
	I1212 00:35:36.715189  313657 start.go:927] validating driver "docker" against &{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:36.715263  313657 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:36.715863  313657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:36.770538  313657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 00:35:36.761701455 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:35:36.770807  313657 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 00:35:36.770828  313657 cni.go:84] Creating CNI manager for ""
	I1212 00:35:36.770886  313657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:36.770927  313657 start.go:353] cluster config:
	{Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:36.772612  313657 out.go:179] * Starting "newest-cni-821472" primary control-plane node in "newest-cni-821472" cluster
	I1212 00:35:36.773801  313657 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:35:36.774921  313657 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:35:36.775828  313657 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:36.775859  313657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:35:36.775873  313657 cache.go:65] Caching tarball of preloaded images
	I1212 00:35:36.775931  313657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:35:36.775948  313657 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:35:36.775956  313657 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:35:36.776059  313657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:36.796954  313657 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:35:36.796975  313657 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:35:36.796994  313657 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:35:36.797025  313657 start.go:360] acquireMachinesLock for newest-cni-821472: {Name:mk1920b4afd40f764aad092389429d0db04875a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:35:36.797095  313657 start.go:364] duration metric: took 42.709µs to acquireMachinesLock for "newest-cni-821472"
	I1212 00:35:36.797116  313657 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:35:36.797133  313657 fix.go:54] fixHost starting: 
	I1212 00:35:36.797415  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:36.815009  313657 fix.go:112] recreateIfNeeded on newest-cni-821472: state=Stopped err=<nil>
	W1212 00:35:36.815030  313657 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:35:35.782567  300250 pod_ready.go:83] waiting for pod "kube-proxy-dp8fl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.182605  300250 pod_ready.go:94] pod "kube-proxy-dp8fl" is "Ready"
	I1212 00:35:36.182630  300250 pod_ready.go:86] duration metric: took 400.039495ms for pod "kube-proxy-dp8fl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.383406  300250 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.782412  300250 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-079970" is "Ready"
	I1212 00:35:36.782432  300250 pod_ready.go:86] duration metric: took 399.007194ms for pod "kube-scheduler-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:35:36.782442  300250 pod_ready.go:40] duration metric: took 1.604170425s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:35:36.826188  300250 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:35:36.828786  300250 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-079970" cluster and "default" namespace by default
	I1212 00:35:32.667738  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 00:35:32.667810  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:32.667861  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:32.694152  263844 cri.go:89] found id: "e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29"
	I1212 00:35:32.694178  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:32.694185  263844 cri.go:89] found id: ""
	I1212 00:35:32.694195  263844 logs.go:282] 2 containers: [e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:32.694250  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.698030  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.701602  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:32.701662  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:32.726279  263844 cri.go:89] found id: ""
	I1212 00:35:32.726303  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.726313  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:32.726321  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:32.726370  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:32.752014  263844 cri.go:89] found id: ""
	I1212 00:35:32.752038  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.752046  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:32.752052  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:32.752101  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:32.778627  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:32.778649  263844 cri.go:89] found id: ""
	I1212 00:35:32.778659  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:32.778720  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.782297  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:32.782345  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:32.807291  263844 cri.go:89] found id: ""
	I1212 00:35:32.807319  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.807329  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:32.807336  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:32.807379  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:32.831788  263844 cri.go:89] found id: "053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95"
	I1212 00:35:32.831807  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:32.831813  263844 cri.go:89] found id: ""
	I1212 00:35:32.831822  263844 logs.go:282] 2 containers: [053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:32.831874  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.835418  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:32.838729  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:32.838789  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:32.862931  263844 cri.go:89] found id: ""
	I1212 00:35:32.862951  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.862958  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:32.862963  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:32.863004  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:32.887935  263844 cri.go:89] found id: ""
	I1212 00:35:32.887956  263844 logs.go:282] 0 containers: []
	W1212 00:35:32.887966  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:32.887983  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:32.887995  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:32.975983  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:32.976009  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:32.989497  263844 logs.go:123] Gathering logs for kube-controller-manager [053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95] ...
	I1212 00:35:32.989524  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95"
	I1212 00:35:33.013710  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:33.013734  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:33.038040  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:33.038062  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:33.065603  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:33.065624  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 00:35:34.587325  311720 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:35:34.610803  311720 cli_runner.go:164] Run: docker container inspect auto-129742 --format={{.State.Status}}
	I1212 00:35:34.631514  311720 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:35:34.631535  311720 kic_runner.go:114] Args: [docker exec --privileged auto-129742 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:35:34.678876  311720 cli_runner.go:164] Run: docker container inspect auto-129742 --format={{.State.Status}}
	I1212 00:35:34.698678  311720 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:34.698797  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:34.720756  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:34.721018  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:34.721035  311720 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:34.721699  311720 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40998->127.0.0.1:33098: read: connection reset by peer
	I1212 00:35:37.853402  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-129742
	
	I1212 00:35:37.853432  311720 ubuntu.go:182] provisioning hostname "auto-129742"
	I1212 00:35:37.853555  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:37.876199  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:37.876566  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:37.876594  311720 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-129742 && echo "auto-129742" | sudo tee /etc/hostname
	I1212 00:35:38.026989  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-129742
	
	I1212 00:35:38.027083  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.046329  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:38.046564  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:38.046581  311720 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-129742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-129742/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-129742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:38.176692  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:38.176731  311720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:38.176755  311720 ubuntu.go:190] setting up certificates
	I1212 00:35:38.176765  311720 provision.go:84] configureAuth start
	I1212 00:35:38.176827  311720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129742
	I1212 00:35:38.194501  311720 provision.go:143] copyHostCerts
	I1212 00:35:38.194561  311720 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:38.194575  311720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:38.194653  311720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:38.194789  311720 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:38.194801  311720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:38.194844  311720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:38.194942  311720 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:38.194954  311720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:38.194988  311720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:38.195075  311720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.auto-129742 san=[127.0.0.1 192.168.94.2 auto-129742 localhost minikube]
	I1212 00:35:38.258148  311720 provision.go:177] copyRemoteCerts
	I1212 00:35:38.258201  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:38.258239  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.276391  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:38.371020  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:35:38.389078  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:38.405517  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1212 00:35:38.421620  311720 provision.go:87] duration metric: took 244.837784ms to configureAuth
	I1212 00:35:38.421642  311720 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:38.421782  311720 config.go:182] Loaded profile config "auto-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:35:38.421872  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.439397  311720 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:38.439621  311720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 00:35:38.439637  311720 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:38.705109  311720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:38.705134  311720 machine.go:97] duration metric: took 4.006432612s to provisionDockerMachine
	I1212 00:35:38.705148  311720 client.go:176] duration metric: took 8.918452873s to LocalClient.Create
	I1212 00:35:38.705164  311720 start.go:167] duration metric: took 8.91850497s to libmachine.API.Create "auto-129742"
	I1212 00:35:38.705174  311720 start.go:293] postStartSetup for "auto-129742" (driver="docker")
	I1212 00:35:38.705195  311720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:38.705258  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:38.705308  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.722687  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:38.818543  311720 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:38.821829  311720 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:38.821860  311720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:38.821871  311720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:38.821922  311720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:38.821994  311720 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:38.822078  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:38.829079  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:38.847819  311720 start.go:296] duration metric: took 142.630457ms for postStartSetup
	I1212 00:35:38.848136  311720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129742
	I1212 00:35:38.866156  311720 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/config.json ...
	I1212 00:35:38.866447  311720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:38.866537  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.884046  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:38.974764  311720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:38.978927  311720 start.go:128] duration metric: took 9.194206502s to createHost
	I1212 00:35:38.978948  311720 start.go:83] releasing machines lock for "auto-129742", held for 9.194343995s
	I1212 00:35:38.979007  311720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129742
	I1212 00:35:38.997762  311720 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:38.997818  311720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:38.997831  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:38.997895  311720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129742
	I1212 00:35:39.016623  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:39.017350  311720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/auto-129742/id_rsa Username:docker}
	I1212 00:35:39.161811  311720 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:39.167759  311720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:39.200505  311720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:39.204645  311720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:39.204707  311720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:39.228273  311720 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:35:39.228295  311720 start.go:496] detecting cgroup driver to use...
	I1212 00:35:39.228326  311720 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:39.228363  311720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:39.243212  311720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:39.254237  311720 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:39.254286  311720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:39.270799  311720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:39.286853  311720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:39.366180  311720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:39.449288  311720 docker.go:234] disabling docker service ...
	I1212 00:35:39.449359  311720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:39.466333  311720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:39.478905  311720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:39.560467  311720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:39.640214  311720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:35:39.651768  311720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:39.664769  311720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:39.664823  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.674135  311720 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:39.674187  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.682174  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.689970  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.698157  311720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:39.705528  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.713225  311720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.725377  311720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:39.733216  311720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:39.740029  311720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:39.746779  311720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:39.824567  311720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:35:39.964512  311720 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:39.964586  311720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:39.968445  311720 start.go:564] Will wait 60s for crictl version
	I1212 00:35:39.968513  311720 ssh_runner.go:195] Run: which crictl
	I1212 00:35:39.971962  311720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:39.995943  311720 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:35:39.996015  311720 ssh_runner.go:195] Run: crio --version
	I1212 00:35:40.023093  311720 ssh_runner.go:195] Run: crio --version
	I1212 00:35:40.050712  311720 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:35:40.051747  311720 cli_runner.go:164] Run: docker network inspect auto-129742 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:40.068586  311720 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:40.072316  311720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:40.082284  311720 kubeadm.go:884] updating cluster {Name:auto-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:40.082406  311720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:35:40.082448  311720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:40.112765  311720 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:40.112784  311720 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:40.112821  311720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:40.137112  311720 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:40.137133  311720 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:40.137142  311720 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1212 00:35:40.137235  311720 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-129742 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:35:40.137334  311720 ssh_runner.go:195] Run: crio config
	I1212 00:35:40.182977  311720 cni.go:84] Creating CNI manager for ""
	I1212 00:35:40.183007  311720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:40.183035  311720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:35:40.183068  311720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-129742 NodeName:auto-129742 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:40.183243  311720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-129742"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:35:40.183329  311720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:35:40.191388  311720 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:40.191460  311720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:40.199299  311720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1212 00:35:40.211054  311720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:35:40.225785  311720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1212 00:35:40.237527  311720 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:40.240865  311720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:40.250170  311720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:40.337022  311720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:40.355615  311720 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742 for IP: 192.168.94.2
	I1212 00:35:40.355638  311720 certs.go:195] generating shared ca certs ...
	I1212 00:35:40.355661  311720 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.355816  311720 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:40.355874  311720 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:40.355887  311720 certs.go:257] generating profile certs ...
	I1212 00:35:40.355951  311720 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.key
	I1212 00:35:40.355973  311720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.crt with IP's: []
	I1212 00:35:40.592669  311720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.crt ...
	I1212 00:35:40.592699  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.crt: {Name:mk76e1dd7172803fbf1cbffa40c75cb48c0a838a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.592881  311720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.key ...
	I1212 00:35:40.592899  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/client.key: {Name:mk1935c540fad051cff06def660ab58dd355b134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.593007  311720 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351
	I1212 00:35:40.593028  311720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1212 00:35:40.639283  311720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351 ...
	I1212 00:35:40.639305  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351: {Name:mk9861694cb000aec3a0bd5942993e1c1f27a76b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.639440  311720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351 ...
	I1212 00:35:40.639453  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351: {Name:mkcce82003e0c847ecd0e8e701673ac85b4767d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.639537  311720 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt.61554351 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt
	I1212 00:35:40.639609  311720 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key.61554351 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key
	I1212 00:35:40.639661  311720 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key
	I1212 00:35:40.639675  311720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt with IP's: []
	I1212 00:35:40.754363  311720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt ...
	I1212 00:35:40.754386  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt: {Name:mkf4fd1cbb46bff32b7eda0419dad9a557f8a6d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.754562  311720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key ...
	I1212 00:35:40.754578  311720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key: {Name:mke823010157e4fd78cfa585a52980304b928622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:40.754789  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:40.754836  311720 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:40.754851  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:40.754887  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:40.754924  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:40.754959  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:40.755014  311720 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:40.755584  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:40.773207  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:40.790251  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:40.807658  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:40.823962  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1212 00:35:40.840940  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:40.857607  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:40.874553  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/auto-129742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:35:40.891098  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:40.909544  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:40.925460  311720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:40.941738  311720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:35:40.953181  311720 ssh_runner.go:195] Run: openssl version
	I1212 00:35:40.958781  311720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:40.965400  311720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:40.972141  311720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:40.975535  311720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:40.975586  311720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:41.012011  311720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:41.019203  311720 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:35:41.026483  311720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.033388  311720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:41.040112  311720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.043591  311720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.043637  311720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:41.082194  311720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:41.090166  311720 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:35:41.097782  311720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.106091  311720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:41.113295  311720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.117095  311720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.117155  311720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:41.153797  311720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:41.161240  311720 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:35:41.168521  311720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:41.171803  311720 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:35:41.171846  311720 kubeadm.go:401] StartCluster: {Name:auto-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:41.171906  311720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:41.171939  311720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:41.197104  311720 cri.go:89] found id: ""
	I1212 00:35:41.197153  311720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:41.204439  311720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:35:41.211497  311720 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:35:41.211545  311720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:35:41.218528  311720 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:35:41.218544  311720 kubeadm.go:158] found existing configuration files:
	
	I1212 00:35:41.218580  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:35:41.225466  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:35:41.225526  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:35:41.232209  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:35:41.239422  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:35:41.239471  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:35:41.246617  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:35:41.253661  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:35:41.253706  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:35:41.260600  311720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:35:41.268103  311720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:35:41.268156  311720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:35:41.275464  311720 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:35:41.314774  311720 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:35:41.314819  311720 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:35:41.333053  311720 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:35:41.333128  311720 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 00:35:41.333187  311720 kubeadm.go:319] OS: Linux
	I1212 00:35:41.333281  311720 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:35:41.333353  311720 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:35:41.333411  311720 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:35:41.333538  311720 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:35:41.333609  311720 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:35:41.333670  311720 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:35:41.333731  311720 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:35:41.333787  311720 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 00:35:41.391557  311720 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:35:41.391703  311720 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:35:41.391828  311720 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:35:41.399245  311720 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:35:36.816773  313657 out.go:252] * Restarting existing docker container for "newest-cni-821472" ...
	I1212 00:35:36.816850  313657 cli_runner.go:164] Run: docker start newest-cni-821472
	I1212 00:35:37.080371  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:37.099121  313657 kic.go:430] container "newest-cni-821472" state is running.
	I1212 00:35:37.099565  313657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:37.118114  313657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/config.json ...
	I1212 00:35:37.118375  313657 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:37.118451  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:37.136773  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:37.137029  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:37.137045  313657 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:37.137774  313657 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53888->127.0.0.1:33103: read: connection reset by peer
	I1212 00:35:40.271768  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:40.271794  313657 ubuntu.go:182] provisioning hostname "newest-cni-821472"
	I1212 00:35:40.271842  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.294413  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:40.294663  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:40.294683  313657 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-821472 && echo "newest-cni-821472" | sudo tee /etc/hostname
	I1212 00:35:40.435333  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-821472
	
	I1212 00:35:40.435409  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.455014  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:40.455218  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:40.455234  313657 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-821472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-821472/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-821472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:40.586503  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:40.586531  313657 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:35:40.586569  313657 ubuntu.go:190] setting up certificates
	I1212 00:35:40.586583  313657 provision.go:84] configureAuth start
	I1212 00:35:40.586659  313657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:40.605310  313657 provision.go:143] copyHostCerts
	I1212 00:35:40.605389  313657 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:35:40.605409  313657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:35:40.605489  313657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:35:40.605616  313657 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:35:40.605630  313657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:35:40.605684  313657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:35:40.605770  313657 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:35:40.605779  313657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:35:40.605818  313657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:35:40.605935  313657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.newest-cni-821472 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-821472]
	I1212 00:35:40.662347  313657 provision.go:177] copyRemoteCerts
	I1212 00:35:40.662411  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:40.662468  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.681297  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:40.777054  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:35:40.795024  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:35:40.811612  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:35:40.827828  313657 provision.go:87] duration metric: took 241.225014ms to configureAuth
	I1212 00:35:40.827853  313657 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:40.828024  313657 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:40.828117  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:40.846619  313657 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:40.846888  313657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 00:35:40.846924  313657 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:35:41.128685  313657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:35:41.128707  313657 machine.go:97] duration metric: took 4.01031546s to provisionDockerMachine
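	provisionDockerMachine finishes by dropping CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarting cri-o. A hedged verification sketch (assuming, as in minikube's kicbase image, that the crio unit sources this file):
	
		cat /etc/sysconfig/crio.minikube    # should contain --insecure-registry 10.96.0.0/12
		sudo systemctl is-active crio       # expected: active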
	I1212 00:35:41.128720  313657 start.go:293] postStartSetup for "newest-cni-821472" (driver="docker")
	I1212 00:35:41.128735  313657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:35:41.128800  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:35:41.128844  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.146619  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.241954  313657 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:35:41.245283  313657 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:35:41.245314  313657 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:35:41.245327  313657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:35:41.245380  313657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:35:41.245456  313657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:35:41.245579  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:35:41.252803  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:41.270439  313657 start.go:296] duration metric: took 141.702762ms for postStartSetup
	I1212 00:35:41.270533  313657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:35:41.270589  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.291253  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.385395  313657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:35:41.389632  313657 fix.go:56] duration metric: took 4.592500888s for fixHost
	I1212 00:35:41.389658  313657 start.go:83] releasing machines lock for "newest-cni-821472", held for 4.592550322s
	I1212 00:35:41.389719  313657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-821472
	I1212 00:35:41.409111  313657 ssh_runner.go:195] Run: cat /version.json
	I1212 00:35:41.409194  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.409202  313657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:35:41.409264  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:41.426685  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.427390  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:41.573962  313657 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:41.580109  313657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:35:41.612958  313657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:35:41.617419  313657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:35:41.617521  313657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:35:41.625155  313657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:35:41.625174  313657 start.go:496] detecting cgroup driver to use...
	I1212 00:35:41.625206  313657 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:35:41.625270  313657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:35:41.638804  313657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:35:41.650436  313657 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:35:41.650494  313657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:35:41.663651  313657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:35:41.674712  313657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:35:41.753037  313657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:35:41.833338  313657 docker.go:234] disabling docker service ...
	I1212 00:35:41.833390  313657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:35:41.847161  313657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:35:41.858716  313657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:35:41.941176  313657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:35:42.022532  313657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:35:42.034350  313657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:35:42.047735  313657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:35:42.047788  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.055989  313657 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:35:42.056046  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.064168  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.072270  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.080522  313657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:35:42.087789  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.095782  313657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.103399  313657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:35:42.111579  313657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:35:42.118175  313657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:35:42.124818  313657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:42.202070  313657 ssh_runner.go:195] Run: sudo systemctl restart crio
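	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (illustrative excerpt only; section headers and any other keys come from the pre-existing file):
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	
	The daemon-reload and crio restart above make these settings effective before kubelet is started.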
	I1212 00:35:42.349497  313657 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:35:42.349576  313657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:35:42.353384  313657 start.go:564] Will wait 60s for crictl version
	I1212 00:35:42.353437  313657 ssh_runner.go:195] Run: which crictl
	I1212 00:35:42.356828  313657 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:35:42.380925  313657 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
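	Because /etc/crictl.yaml was written a few steps earlier to point at unix:///var/run/crio/crio.sock, the plain crictl version call above already talks to CRI-O; the explicit form of the same query would be:
	
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version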
	I1212 00:35:42.380991  313657 ssh_runner.go:195] Run: crio --version
	I1212 00:35:42.408051  313657 ssh_runner.go:195] Run: crio --version
	I1212 00:35:42.437900  313657 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:35:42.439134  313657 cli_runner.go:164] Run: docker network inspect newest-cni-821472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:35:42.458296  313657 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 00:35:42.462111  313657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:42.473493  313657 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 00:35:42.474492  313657 kubeadm.go:884] updating cluster {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:35:42.474641  313657 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:42.474706  313657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:42.503840  313657 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:42.503857  313657 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:35:42.503898  313657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:35:42.527719  313657 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:35:42.527738  313657 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:35:42.527746  313657 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 00:35:42.527850  313657 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-821472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
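	The unit fragment above is what minikube is about to write as the kubelet drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below). Once written, it can be inspected on the node with, for example:
	
		systemctl cat kubelet
		cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf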
	I1212 00:35:42.527930  313657 ssh_runner.go:195] Run: crio config
	I1212 00:35:42.586235  313657 cni.go:84] Creating CNI manager for ""
	I1212 00:35:42.586263  313657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:42.586281  313657 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 00:35:42.586309  313657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-821472 NodeName:newest-cni-821472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:35:42.586491  313657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-821472"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:35:42.586563  313657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:35:42.595617  313657 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:35:42.595679  313657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:35:42.603149  313657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:35:42.615085  313657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:35:42.626755  313657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
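	The kubeadm config rendered above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new (2218 bytes). minikube later diffs it against the existing file to decide whether the control plane needs reconfiguring; a hedged manual equivalent, assuming a kubeadm release new enough to ship the config validate subcommand:
	
		sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
		sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
		    --config /var/tmp/minikube/kubeadm.yaml.new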
	I1212 00:35:42.638262  313657 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:35:42.641589  313657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:35:42.650971  313657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:42.728755  313657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:42.751540  313657 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472 for IP: 192.168.76.2
	I1212 00:35:42.751556  313657 certs.go:195] generating shared ca certs ...
	I1212 00:35:42.751573  313657 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:42.751733  313657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:35:42.751802  313657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:35:42.751819  313657 certs.go:257] generating profile certs ...
	I1212 00:35:42.751927  313657 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/client.key
	I1212 00:35:42.751999  313657 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key.e08375e0
	I1212 00:35:42.752048  313657 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key
	I1212 00:35:42.752192  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:35:42.752235  313657 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:35:42.752248  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:35:42.752283  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:35:42.752318  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:35:42.752360  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:35:42.752415  313657 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:35:42.753053  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:35:42.770854  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:35:42.790078  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:35:42.807620  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:35:42.828730  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:35:42.849521  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:35:42.865729  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:35:42.881653  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/newest-cni-821472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:35:42.898013  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:35:42.914298  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:35:42.930406  313657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:35:42.947383  313657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:35:42.959592  313657 ssh_runner.go:195] Run: openssl version
	I1212 00:35:42.965251  313657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:42.971937  313657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:35:42.978671  313657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:42.982018  313657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:42.982066  313657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:35:43.016314  313657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:35:43.023303  313657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.030374  313657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:35:43.037133  313657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.040548  313657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.040580  313657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:35:43.074151  313657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:35:43.081008  313657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.087822  313657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:35:43.094643  313657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.098274  313657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.098321  313657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:35:43.133554  313657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
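	The ln -fs / openssl x509 -hash / test -L sequence repeated above is the standard OpenSSL trust-store layout: each CA is linked into /etc/ssl/certs under its subject hash plus a .0 suffix. The same idea as a short sketch:
	
		CERT=/usr/share/ca-certificates/minikubeCA.pem
		HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
		sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
		test -L "/etc/ssl/certs/${HASH}.0" && echo "CA linked into the trust store"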
	I1212 00:35:43.141912  313657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:35:43.146261  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:35:43.186934  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:35:43.225313  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:35:43.263428  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:35:43.309905  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:35:43.363587  313657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
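	Each -checkend 86400 call asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit here is what would trigger certificate regeneration. For example:
	
		sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
		  && echo "valid for at least another 24h" \
		  || echo "expires within 24h (or is already expired)"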
	I1212 00:35:43.422765  313657 kubeadm.go:401] StartCluster: {Name:newest-cni-821472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-821472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:43.422885  313657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:43.422972  313657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:43.457007  313657 cri.go:89] found id: "0a27b96d1bb7b73ed434ac959df9965a37373d6cec1e595449d9d42feb886a00"
	I1212 00:35:43.457043  313657 cri.go:89] found id: "9783b15121a958e88f6034a4a1e6706329e424058eba2e6f8347546407dabcf0"
	I1212 00:35:43.457056  313657 cri.go:89] found id: "174c7ee6c2a1a9165b183524a2e37c57f1d18763ab56511debb798c7745ad211"
	I1212 00:35:43.457062  313657 cri.go:89] found id: "d7a4fba82f0eabc43ef5bf98075ffc80893b0fb5b78287f6ce879a700f528fe8"
	I1212 00:35:43.457066  313657 cri.go:89] found id: ""
	I1212 00:35:43.457148  313657 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 00:35:43.470031  313657 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:35:43Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:35:43.470109  313657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:35:43.478723  313657 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:35:43.478750  313657 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:35:43.478793  313657 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:35:43.486064  313657 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:35:43.486980  313657 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-821472" does not appear in /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:43.487360  313657 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-10975/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-821472" cluster setting kubeconfig missing "newest-cni-821472" context setting]
	I1212 00:35:43.488053  313657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:43.489882  313657 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:35:43.497249  313657 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 00:35:43.497276  313657 kubeadm.go:602] duration metric: took 18.518865ms to restartPrimaryControlPlane
	I1212 00:35:43.497296  313657 kubeadm.go:403] duration metric: took 74.544874ms to StartCluster
	I1212 00:35:43.497311  313657 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:43.497364  313657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:35:43.498716  313657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:43.498945  313657 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:35:43.499041  313657 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:35:43.499128  313657 config.go:182] Loaded profile config "newest-cni-821472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:43.499138  313657 addons.go:70] Setting dashboard=true in profile "newest-cni-821472"
	I1212 00:35:43.499151  313657 addons.go:239] Setting addon dashboard=true in "newest-cni-821472"
	W1212 00:35:43.499160  313657 addons.go:248] addon dashboard should already be in state true
	I1212 00:35:43.499168  313657 addons.go:70] Setting default-storageclass=true in profile "newest-cni-821472"
	I1212 00:35:43.499128  313657 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-821472"
	I1212 00:35:43.499189  313657 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:43.499192  313657 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-821472"
	I1212 00:35:43.499191  313657 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-821472"
	W1212 00:35:43.499199  313657 addons.go:248] addon storage-provisioner should already be in state true
	I1212 00:35:43.499216  313657 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:43.499523  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.499674  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.499807  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.502446  313657 out.go:179] * Verifying Kubernetes components...
	I1212 00:35:43.503509  313657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:35:43.528175  313657 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 00:35:43.528255  313657 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:35:43.529215  313657 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:43.529234  313657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:35:43.529309  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:43.530562  313657 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 00:35:41.403306  311720 out.go:252]   - Generating certificates and keys ...
	I1212 00:35:41.403420  311720 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:35:41.403546  311720 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:35:41.610366  311720 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:35:41.904549  311720 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:35:42.236869  311720 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:35:42.432939  311720 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:35:42.670711  311720 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:35:42.670850  311720 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-129742 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:35:42.733388  311720 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:35:42.733562  311720 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-129742 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 00:35:42.890557  311720 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:35:43.189890  311720 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:35:43.432084  311720 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:35:43.432181  311720 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:35:44.496937  311720 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:35:43.532520  313657 addons.go:239] Setting addon default-storageclass=true in "newest-cni-821472"
	W1212 00:35:43.532542  313657 addons.go:248] addon default-storageclass should already be in state true
	I1212 00:35:43.532571  313657 host.go:66] Checking if "newest-cni-821472" exists ...
	I1212 00:35:43.533019  313657 cli_runner.go:164] Run: docker container inspect newest-cni-821472 --format={{.State.Status}}
	I1212 00:35:43.533188  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 00:35:43.533204  313657 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 00:35:43.533268  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:43.564658  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:43.566726  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:43.571698  313657 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:43.571744  313657 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:35:43.571811  313657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-821472
	I1212 00:35:43.596063  313657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/newest-cni-821472/id_rsa Username:docker}
	I1212 00:35:43.673895  313657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:35:43.689745  313657 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:35:43.689816  313657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:35:43.691914  313657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:43.694831  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 00:35:43.694849  313657 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 00:35:43.704985  313657 api_server.go:72] duration metric: took 206.002727ms to wait for apiserver process to appear ...
	I1212 00:35:43.705014  313657 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:35:43.705040  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:43.711963  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 00:35:43.711982  313657 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 00:35:43.713874  313657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:43.728659  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 00:35:43.728713  313657 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 00:35:43.745923  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 00:35:43.745946  313657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 00:35:43.764806  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 00:35:43.764899  313657 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 00:35:43.779853  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 00:35:43.779879  313657 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 00:35:43.794230  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 00:35:43.794256  313657 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 00:35:43.807520  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 00:35:43.807543  313657 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 00:35:43.823054  313657 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 00:35:43.823074  313657 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 00:35:43.836827  313657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
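	Once those manifests are applied, the dashboard objects land in the kubernetes-dashboard namespace. A hedged way to watch them come up, assuming kubectl is pointed at this profile's context:
	
		kubectl --context newest-cni-821472 -n kubernetes-dashboard get deploy,svc,pods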
	I1212 00:35:44.593775  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:35:44.593804  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:35:44.593820  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:44.599187  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:35:44.599214  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
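	These 403s are expected right after the apiserver restarts: the probe hits /healthz anonymously, and until the RBAC bootstrap roles are recreated system:anonymous is denied; the 500s that follow show the remaining post-start hooks (the [-] lines) still settling. The same probe can be run by hand:
	
		curl -k "https://192.168.76.2:8443/healthz?verbose"   # -k because the apiserver cert is not in the local trust store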
	I1212 00:35:44.705950  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:44.710897  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:35:44.710933  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:35:45.196541  313657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.50459537s)
	I1212 00:35:45.196600  313657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.482698107s)
	I1212 00:35:45.196696  313657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.359839372s)
	I1212 00:35:45.198335  313657 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-821472 addons enable metrics-server
	
	I1212 00:35:45.205281  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:45.210437  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:35:45.210462  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:35:45.213577  313657 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1212 00:35:44.808888  311720 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:35:45.083910  311720 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:35:45.340764  311720 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:35:45.748663  311720 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:35:45.749566  311720 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:35:45.754425  311720 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:35:45.214712  313657 addons.go:530] duration metric: took 1.71567698s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 00:35:45.705550  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:45.710965  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:35:45.710992  313657 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:35:46.206091  313657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 00:35:46.210732  313657 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 00:35:46.212014  313657 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 00:35:46.212039  313657 api_server.go:131] duration metric: took 2.507015976s to wait for apiserver health ...
	I1212 00:35:46.212049  313657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:35:46.215707  313657 system_pods.go:59] 8 kube-system pods found
	I1212 00:35:46.215736  313657 system_pods.go:61] "coredns-7d764666f9-jh7k7" [47b3a0d4-8cf1-493d-8476-854bf16da9c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 00:35:46.215747  313657 system_pods.go:61] "etcd-newest-cni-821472" [873a9831-a5b5-4c30-ab0d-03b2d4f01bc9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:35:46.215755  313657 system_pods.go:61] "kindnet-j79t9" [d76b2dd5-9a77-4340-8bbf-9c37dbb875ed] Running
	I1212 00:35:46.215764  313657 system_pods.go:61] "kube-apiserver-newest-cni-821472" [f133af68-91ae-4346-a167-9b8a88347f18] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:35:46.215775  313657 system_pods.go:61] "kube-controller-manager-newest-cni-821472" [549c410e-aef5-4f29-b928-488385df0998] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:35:46.215781  313657 system_pods.go:61] "kube-proxy-9kt8x" [5f73abae-7ab2-4110-a5e8-3623cf25bab2] Running
	I1212 00:35:46.215791  313657 system_pods.go:61] "kube-scheduler-newest-cni-821472" [4daba7f7-0db4-44d6-b143-0d9dba4b5048] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:35:46.215802  313657 system_pods.go:61] "storage-provisioner" [cd0e3704-d2bd-42bc-b3fb-5da6006b6e6d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 00:35:46.215809  313657 system_pods.go:74] duration metric: took 3.752795ms to wait for pod list to return data ...
	I1212 00:35:46.215821  313657 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:35:46.218587  313657 default_sa.go:45] found service account: "default"
	I1212 00:35:46.218607  313657 default_sa.go:55] duration metric: took 2.780354ms for default service account to be created ...
	I1212 00:35:46.218620  313657 kubeadm.go:587] duration metric: took 2.719646377s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 00:35:46.218647  313657 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:35:46.221555  313657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 00:35:46.221582  313657 node_conditions.go:123] node cpu capacity is 8
	I1212 00:35:46.221599  313657 node_conditions.go:105] duration metric: took 2.945508ms to run NodePressure ...
	I1212 00:35:46.221613  313657 start.go:242] waiting for startup goroutines ...
	I1212 00:35:46.221624  313657 start.go:247] waiting for cluster config update ...
	I1212 00:35:46.221638  313657 start.go:256] writing updated cluster config ...
	I1212 00:35:46.221922  313657 ssh_runner.go:195] Run: rm -f paused
	I1212 00:35:46.280888  313657 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 00:35:46.282884  313657 out.go:179] * Done! kubectl is now configured to use "newest-cni-821472" cluster and "default" namespace by default
	I1212 00:35:43.120267  263844 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.054628138s)
	W1212 00:35:43.120298  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1212 00:35:43.120305  263844 logs.go:123] Gathering logs for kube-apiserver [e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29] ...
	I1212 00:35:43.120323  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29"
	I1212 00:35:43.151058  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:43.151080  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:43.183408  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:43.183439  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:43.209551  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:43.209574  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:45.765529  263844 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 00:35:45.782543  263844 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:53604->192.168.85.2:8443: read: connection reset by peer
	I1212 00:35:45.782606  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:35:45.782663  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:35:45.815980  263844 cri.go:89] found id: "e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29"
	I1212 00:35:45.816003  263844 cri.go:89] found id: "de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:45.816008  263844 cri.go:89] found id: ""
	I1212 00:35:45.816017  263844 logs.go:282] 2 containers: [e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db]
	I1212 00:35:45.816070  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:45.821600  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:45.826994  263844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:35:45.827062  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:35:45.858146  263844 cri.go:89] found id: ""
	I1212 00:35:45.858171  263844 logs.go:282] 0 containers: []
	W1212 00:35:45.858180  263844 logs.go:284] No container was found matching "etcd"
	I1212 00:35:45.858187  263844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:35:45.858238  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:35:45.893267  263844 cri.go:89] found id: ""
	I1212 00:35:45.893292  263844 logs.go:282] 0 containers: []
	W1212 00:35:45.893303  263844 logs.go:284] No container was found matching "coredns"
	I1212 00:35:45.893310  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:35:45.893364  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:35:45.925316  263844 cri.go:89] found id: "5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:45.925335  263844 cri.go:89] found id: ""
	I1212 00:35:45.925343  263844 logs.go:282] 1 containers: [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a]
	I1212 00:35:45.925387  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:45.929750  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:35:45.929800  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:35:45.959869  263844 cri.go:89] found id: ""
	I1212 00:35:45.959899  263844 logs.go:282] 0 containers: []
	W1212 00:35:45.959912  263844 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:35:45.959920  263844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:35:45.959974  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:35:45.992072  263844 cri.go:89] found id: "053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95"
	I1212 00:35:45.992097  263844 cri.go:89] found id: "4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:45.992103  263844 cri.go:89] found id: ""
	I1212 00:35:45.992112  263844 logs.go:282] 2 containers: [053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62]
	I1212 00:35:45.992179  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:45.996698  263844 ssh_runner.go:195] Run: which crictl
	I1212 00:35:46.000777  263844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:35:46.000842  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:35:46.029239  263844 cri.go:89] found id: ""
	I1212 00:35:46.029265  263844 logs.go:282] 0 containers: []
	W1212 00:35:46.029274  263844 logs.go:284] No container was found matching "kindnet"
	I1212 00:35:46.029282  263844 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 00:35:46.029339  263844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 00:35:46.058976  263844 cri.go:89] found id: ""
	I1212 00:35:46.058996  263844 logs.go:282] 0 containers: []
	W1212 00:35:46.059004  263844 logs.go:284] No container was found matching "storage-provisioner"
	I1212 00:35:46.059017  263844 logs.go:123] Gathering logs for container status ...
	I1212 00:35:46.059028  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:35:46.088287  263844 logs.go:123] Gathering logs for kubelet ...
	I1212 00:35:46.088310  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:35:46.181165  263844 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:35:46.181190  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:35:46.246764  263844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:35:46.246783  263844 logs.go:123] Gathering logs for kube-apiserver [e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29] ...
	I1212 00:35:46.246805  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e4b3f90019ffe12f1cd0a99111a7fb12d9789bdda7412945e8410d2c06aaec29"
	I1212 00:35:46.284297  263844 logs.go:123] Gathering logs for kube-apiserver [de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db] ...
	I1212 00:35:46.284328  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de54ae009845404eeea2616a16e2df35e538a2702b39ea3dd6926fd5289f55db"
	I1212 00:35:46.327077  263844 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:35:46.327107  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:35:46.403735  263844 logs.go:123] Gathering logs for dmesg ...
	I1212 00:35:46.403765  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:35:46.420563  263844 logs.go:123] Gathering logs for kube-scheduler [5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a] ...
	I1212 00:35:46.420590  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a042a071299c451e721636affc6127684a0b8189f0324cd4fff8e731f7ffa5a"
	I1212 00:35:46.452051  263844 logs.go:123] Gathering logs for kube-controller-manager [053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95] ...
	I1212 00:35:46.452085  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 053cb623f2b2d1606bf18790a998c3a179ad582bb5d8d0e8698b61e15f500a95"
	I1212 00:35:46.480039  263844 logs.go:123] Gathering logs for kube-controller-manager [4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62] ...
	I1212 00:35:46.480067  263844 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d3d83b1dc3e4b1d752c3eedda4a8eacfc4eb9b6043076b1e60a54723b9dcf62"
	I1212 00:35:45.757580  311720 out.go:252]   - Booting up control plane ...
	I1212 00:35:45.757704  311720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:35:45.757812  311720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:35:45.757908  311720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:35:45.774609  311720 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:35:45.774752  311720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:35:45.785788  311720 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:35:45.786358  311720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:35:45.786431  311720 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:35:45.905842  311720 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:35:45.905992  311720 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:35:46.907469  311720 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00188174s
	I1212 00:35:46.911863  311720 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:35:46.911996  311720 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1212 00:35:46.912124  311720 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:35:46.912236  311720 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:35:48.605203  311720 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.693140323s
	I1212 00:35:49.019249  311720 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.107220081s
	I1212 00:35:50.913101  311720 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001187365s
	I1212 00:35:50.932646  311720 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:35:50.941952  311720 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:35:50.951746  311720 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:35:50.952037  311720 kubeadm.go:319] [mark-control-plane] Marking the node auto-129742 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:35:50.960171  311720 kubeadm.go:319] [bootstrap-token] Using token: zydenl.4flbnab2vejqitbn
	
	
	==> CRI-O <==
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.12899612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.132438373Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6fc69b2e-0617-4ed2-be06-0508b37415cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.134569947Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.134986943Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=49fe9bfe-8c4f-48c8-807e-e984d829ad70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.135382893Z" level=info msg="Ran pod sandbox 463b6bb20138c3101447df0fbe90586d1d1fe39556c07ee2701aaf82dee81f8e with infra container: kube-system/kindnet-j79t9/POD" id=6fc69b2e-0617-4ed2-be06-0508b37415cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.136586327Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ec8b330e-0fde-417d-b9f2-c17ef2f440df name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.136979745Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.137777562Z" level=info msg="Ran pod sandbox 791a2d2f2cfe0da14e8e185791486531b569d08578f8dd6a9996b3a6a5c3a3c0 with infra container: kube-system/kube-proxy-9kt8x/POD" id=49fe9bfe-8c4f-48c8-807e-e984d829ad70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.13796508Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=dac4d810-6778-4c59-8e22-196e0a966274 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.138777042Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b8c677d3-b341-4301-8d6a-8b73a9395612 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.139163156Z" level=info msg="Creating container: kube-system/kindnet-j79t9/kindnet-cni" id=a0c7abdb-697f-424b-8688-ffe766e7c730 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.139250084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.139793232Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=892b9d75-a357-4cdb-ab3c-d8c8c3b1aa66 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.140689715Z" level=info msg="Creating container: kube-system/kube-proxy-9kt8x/kube-proxy" id=7b04894c-bf97-44a3-9395-7f0a02244652 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.140802362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.14413845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.144889514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.146883615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.147313219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.172719934Z" level=info msg="Created container 8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80: kube-system/kindnet-j79t9/kindnet-cni" id=a0c7abdb-697f-424b-8688-ffe766e7c730 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.173509079Z" level=info msg="Starting container: 8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80" id=26fd390f-0532-4dcc-8a03-6d7074e3c193 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.176079878Z" level=info msg="Started container" PID=1053 containerID=8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80 description=kube-system/kindnet-j79t9/kindnet-cni id=26fd390f-0532-4dcc-8a03-6d7074e3c193 name=/runtime.v1.RuntimeService/StartContainer sandboxID=463b6bb20138c3101447df0fbe90586d1d1fe39556c07ee2701aaf82dee81f8e
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.179514137Z" level=info msg="Created container d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc: kube-system/kube-proxy-9kt8x/kube-proxy" id=7b04894c-bf97-44a3-9395-7f0a02244652 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.180240947Z" level=info msg="Starting container: d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc" id=c9b8ac6a-fe51-48f2-9a1b-364534edfbcf name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:35:45 newest-cni-821472 crio[520]: time="2025-12-12T00:35:45.183669336Z" level=info msg="Started container" PID=1054 containerID=d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc description=kube-system/kube-proxy-9kt8x/kube-proxy id=c9b8ac6a-fe51-48f2-9a1b-364534edfbcf name=/runtime.v1.RuntimeService/StartContainer sandboxID=791a2d2f2cfe0da14e8e185791486531b569d08578f8dd6a9996b3a6a5c3a3c0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d8c8a20500f28       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   791a2d2f2cfe0       kube-proxy-9kt8x                            kube-system
	8d3a33b59ee23       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   463b6bb20138c       kindnet-j79t9                               kube-system
	0a27b96d1bb7b       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   cd2919eb1b73a       kube-controller-manager-newest-cni-821472   kube-system
	9783b15121a95       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   cb73983681586       etcd-newest-cni-821472                      kube-system
	174c7ee6c2a1a       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   659df6c91238b       kube-apiserver-newest-cni-821472            kube-system
	d7a4fba82f0ea       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   8 seconds ago       Running             kube-scheduler            1                   80291f35ffb94       kube-scheduler-newest-cni-821472            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-821472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-821472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=newest-cni-821472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_35_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:35:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-821472
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:35:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:35:44 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:35:44 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:35:44 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 12 Dec 2025 00:35:44 +0000   Fri, 12 Dec 2025 00:35:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-821472
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                aee33282-724a-47cd-8807-62e94d0c0413
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-821472                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-j79t9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-821472             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-821472    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-9kt8x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-821472             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node newest-cni-821472 event: Registered Node newest-cni-821472 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-821472 event: Registered Node newest-cni-821472 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [9783b15121a958e88f6034a4a1e6706329e424058eba2e6f8347546407dabcf0] <==
	{"level":"warn","ts":"2025-12-12T00:35:43.975874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:43.984520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:43.991863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:43.998242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.004738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.010903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.017013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.022987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.029331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.046679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.054240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.061668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.068371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.078173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.085003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.092567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.099260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.106700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.112954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.135694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.138769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.145827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.156332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.163044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:35:44.202089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40288","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:35:52 up  1:18,  0 user,  load average: 4.48, 3.38, 2.11
	Linux newest-cni-821472 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d3a33b59ee230c7f70ddb449a74223e5f977edd7041277b7fdacac6441a8d80] <==
	I1212 00:35:45.320148       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:35:45.320369       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 00:35:45.410745       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:35:45.410776       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:35:45.410797       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:35:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:35:45.520780       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:35:45.520874       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:35:45.520927       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:35:45.610842       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:35:45.821161       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:35:45.821600       1 metrics.go:72] Registering metrics
	I1212 00:35:45.821720       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [174c7ee6c2a1a9165b183524a2e37c57f1d18763ab56511debb798c7745ad211] <==
	I1212 00:35:44.667299       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:44.667348       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 00:35:44.666614       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 00:35:44.666833       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 00:35:44.669366       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 00:35:44.669703       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 00:35:44.669897       1 aggregator.go:187] initial CRD sync complete...
	I1212 00:35:44.670237       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 00:35:44.670250       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:35:44.670259       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:35:44.676139       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 00:35:44.694042       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:35:44.694348       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:35:44.922921       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:35:44.969862       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:35:44.996719       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:35:45.015923       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:35:45.024661       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:35:45.063160       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.67.72"}
	I1212 00:35:45.073057       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.98.97"}
	I1212 00:35:45.571215       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 00:35:48.301616       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:35:48.351656       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:35:48.403049       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 00:35:48.504213       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0a27b96d1bb7b73ed434ac959df9965a37373d6cec1e595449d9d42feb886a00] <==
	I1212 00:35:47.823813       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824109       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824305       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824438       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824467       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824545       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1212 00:35:47.824614       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-821472"
	I1212 00:35:47.824664       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1212 00:35:47.824682       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824702       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824856       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.824894       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.825247       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.825309       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.864680       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.878012       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.878095       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.880276       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.880295       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.880876       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.882720       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.917680       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.935102       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:47.935125       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 00:35:47.935132       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [d8c8a20500f28fffb3f41f3a09dd55ab24ce16f7a38d70eeeeb5365bb4623cfc] <==
	I1212 00:35:45.223907       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:35:45.278958       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:35:45.379100       1 shared_informer.go:377] "Caches are synced"
	I1212 00:35:45.379138       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 00:35:45.379283       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:35:45.396955       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:35:45.397013       1 server_linux.go:136] "Using iptables Proxier"
	I1212 00:35:45.401825       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:35:45.402222       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 00:35:45.402242       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:35:45.403541       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:35:45.403569       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:35:45.403619       1 config.go:200] "Starting service config controller"
	I1212 00:35:45.403622       1 config.go:309] "Starting node config controller"
	I1212 00:35:45.403634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:35:45.403626       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:35:45.403656       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:35:45.403662       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:35:45.504313       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:35:45.504347       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:35:45.504358       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:35:45.504376       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d7a4fba82f0eabc43ef5bf98075ffc80893b0fb5b78287f6ce879a700f528fe8] <==
	I1212 00:35:43.767224       1 serving.go:386] Generated self-signed cert in-memory
	W1212 00:35:44.621193       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:35:44.621228       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:35:44.621241       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:35:44.621250       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:35:44.655653       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1212 00:35:44.655702       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:35:44.658401       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:35:44.658433       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:35:44.658571       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 00:35:44.659003       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 00:35:44.759567       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.756740     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.762791     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-821472\" already exists" pod="kube-system/kube-scheduler-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.818221     673 apiserver.go:52] "Watching apiserver"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.824810     673 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.856248     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.856622     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.856949     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-821472" containerName="etcd"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.857630     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-821472" containerName="kube-controller-manager"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.865633     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-821472\" already exists" pod="kube-system/kube-scheduler-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.865797     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-821472" containerName="kube-scheduler"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.866982     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-821472\" already exists" pod="kube-system/kube-apiserver-newest-cni-821472"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: E1212 00:35:44.867064     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-821472" containerName="kube-apiserver"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.920215     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f73abae-7ab2-4110-a5e8-3623cf25bab2-lib-modules\") pod \"kube-proxy-9kt8x\" (UID: \"5f73abae-7ab2-4110-a5e8-3623cf25bab2\") " pod="kube-system/kube-proxy-9kt8x"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.920264     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d76b2dd5-9a77-4340-8bbf-9c37dbb875ed-cni-cfg\") pod \"kindnet-j79t9\" (UID: \"d76b2dd5-9a77-4340-8bbf-9c37dbb875ed\") " pod="kube-system/kindnet-j79t9"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.920286     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d76b2dd5-9a77-4340-8bbf-9c37dbb875ed-lib-modules\") pod \"kindnet-j79t9\" (UID: \"d76b2dd5-9a77-4340-8bbf-9c37dbb875ed\") " pod="kube-system/kindnet-j79t9"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.920372     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d76b2dd5-9a77-4340-8bbf-9c37dbb875ed-xtables-lock\") pod \"kindnet-j79t9\" (UID: \"d76b2dd5-9a77-4340-8bbf-9c37dbb875ed\") " pod="kube-system/kindnet-j79t9"
	Dec 12 00:35:44 newest-cni-821472 kubelet[673]: I1212 00:35:44.920432     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f73abae-7ab2-4110-a5e8-3623cf25bab2-xtables-lock\") pod \"kube-proxy-9kt8x\" (UID: \"5f73abae-7ab2-4110-a5e8-3623cf25bab2\") " pod="kube-system/kube-proxy-9kt8x"
	Dec 12 00:35:45 newest-cni-821472 kubelet[673]: E1212 00:35:45.862603     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-821472" containerName="kube-apiserver"
	Dec 12 00:35:45 newest-cni-821472 kubelet[673]: E1212 00:35:45.862768     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-821472" containerName="etcd"
	Dec 12 00:35:45 newest-cni-821472 kubelet[673]: E1212 00:35:45.863132     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-821472" containerName="kube-scheduler"
	Dec 12 00:35:47 newest-cni-821472 kubelet[673]: E1212 00:35:47.343973     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-821472" containerName="etcd"
	Dec 12 00:35:47 newest-cni-821472 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:35:47 newest-cni-821472 kubelet[673]: I1212 00:35:47.367048     673 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 12 00:35:47 newest-cni-821472 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:35:47 newest-cni-821472 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-821472 -n newest-cni-821472
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-821472 -n newest-cni-821472: exit status 2 (377.016136ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-821472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jh7k7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-wcppg kubernetes-dashboard-b84665fb8-68vlp
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-821472 describe pod coredns-7d764666f9-jh7k7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-wcppg kubernetes-dashboard-b84665fb8-68vlp
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-821472 describe pod coredns-7d764666f9-jh7k7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-wcppg kubernetes-dashboard-b84665fb8-68vlp: exit status 1 (58.905548ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jh7k7" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-wcppg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-68vlp" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-821472 describe pod coredns-7d764666f9-jh7k7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-wcppg kubernetes-dashboard-b84665fb8-68vlp: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.08s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-079970 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-079970 --alsologtostderr -v=1: exit status 80 (2.064051944s)

-- stdout --
	* Pausing node default-k8s-diff-port-079970 ... 
	
	

-- /stdout --
** stderr ** 
	I1212 00:37:11.384495  344886 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:37:11.384773  344886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:37:11.384786  344886 out.go:374] Setting ErrFile to fd 2...
	I1212 00:37:11.384792  344886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:37:11.385031  344886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:37:11.385280  344886 out.go:368] Setting JSON to false
	I1212 00:37:11.385297  344886 mustload.go:66] Loading cluster: default-k8s-diff-port-079970
	I1212 00:37:11.385709  344886 config.go:182] Loaded profile config "default-k8s-diff-port-079970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:37:11.386107  344886 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079970 --format={{.State.Status}}
	I1212 00:37:11.408702  344886 host.go:66] Checking if "default-k8s-diff-port-079970" exists ...
	I1212 00:37:11.409019  344886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:37:11.489848  344886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:93 SystemTime:2025-12-12 00:37:11.477153273 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:37:11.491168  344886 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-079970 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 00:37:11.493742  344886 out.go:179] * Pausing node default-k8s-diff-port-079970 ... 
	I1212 00:37:11.494690  344886 host.go:66] Checking if "default-k8s-diff-port-079970" exists ...
	I1212 00:37:11.495072  344886 ssh_runner.go:195] Run: systemctl --version
	I1212 00:37:11.495168  344886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079970
	I1212 00:37:11.522785  344886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/default-k8s-diff-port-079970/id_rsa Username:docker}
	I1212 00:37:11.626197  344886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:37:11.641695  344886 pause.go:52] kubelet running: true
	I1212 00:37:11.641757  344886 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:37:11.855236  344886 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:37:11.855336  344886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:37:11.932147  344886 cri.go:89] found id: "666120bd1d8c90f7d6256aa0ec58755862735161bce1ffcf61fe971cc6d38d5c"
	I1212 00:37:11.932172  344886 cri.go:89] found id: "304099090ec96635a8679bb2021aa91454e4d7f75e1409b6f260e1b2ac58b4be"
	I1212 00:37:11.932177  344886 cri.go:89] found id: "6ef4dec1d3a751038100ee6d4e57afdad852b8404aec051e32e0037be43000bd"
	I1212 00:37:11.932183  344886 cri.go:89] found id: "a9a387380ed7bb228e584e659c552c5abaad6b837f63fddb18537419bddd4ad3"
	I1212 00:37:11.932187  344886 cri.go:89] found id: "e419328ac2418f753aacb6859126cefff6ec94cc6fbef558bfa47c49a2581809"
	I1212 00:37:11.932192  344886 cri.go:89] found id: "d3e91eedf153b18ff649b4cbb1342ffb304ec03a16f4b25db886370aa599f5bc"
	I1212 00:37:11.932196  344886 cri.go:89] found id: "089482e7f0285560f4ccccc8335ad1f7791ef2b6f9d70f836c5ec7be2488701b"
	I1212 00:37:11.932201  344886 cri.go:89] found id: "0e056232ea66fdd61191c4da75560d02835b72581f1d1cde64bdc7b6cae2fcf1"
	I1212 00:37:11.932205  344886 cri.go:89] found id: "7f1db19be25d152013d9ea3bf18c932eb1713990f4eee394405c0469cee33812"
	I1212 00:37:11.932215  344886 cri.go:89] found id: "28856be11c5cc482e16607fa0597a6aa67543ac2cdc0834d1bd6d2909c225289"
	I1212 00:37:11.932219  344886 cri.go:89] found id: "7ac4fbd2f6c8431af2b7ef1b0c5d94360ce07676d7c162a642abe8818be8add8"
	I1212 00:37:11.932224  344886 cri.go:89] found id: ""
	I1212 00:37:11.932268  344886 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:37:11.945210  344886 retry.go:31] will retry after 151.030531ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:37:11Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:37:12.096631  344886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:37:12.112694  344886 pause.go:52] kubelet running: false
	I1212 00:37:12.112814  344886 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:37:12.306198  344886 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:37:12.306320  344886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:37:12.397047  344886 cri.go:89] found id: "666120bd1d8c90f7d6256aa0ec58755862735161bce1ffcf61fe971cc6d38d5c"
	I1212 00:37:12.397081  344886 cri.go:89] found id: "304099090ec96635a8679bb2021aa91454e4d7f75e1409b6f260e1b2ac58b4be"
	I1212 00:37:12.397088  344886 cri.go:89] found id: "6ef4dec1d3a751038100ee6d4e57afdad852b8404aec051e32e0037be43000bd"
	I1212 00:37:12.397093  344886 cri.go:89] found id: "a9a387380ed7bb228e584e659c552c5abaad6b837f63fddb18537419bddd4ad3"
	I1212 00:37:12.397098  344886 cri.go:89] found id: "e419328ac2418f753aacb6859126cefff6ec94cc6fbef558bfa47c49a2581809"
	I1212 00:37:12.397103  344886 cri.go:89] found id: "d3e91eedf153b18ff649b4cbb1342ffb304ec03a16f4b25db886370aa599f5bc"
	I1212 00:37:12.397107  344886 cri.go:89] found id: "089482e7f0285560f4ccccc8335ad1f7791ef2b6f9d70f836c5ec7be2488701b"
	I1212 00:37:12.397112  344886 cri.go:89] found id: "0e056232ea66fdd61191c4da75560d02835b72581f1d1cde64bdc7b6cae2fcf1"
	I1212 00:37:12.397117  344886 cri.go:89] found id: "7f1db19be25d152013d9ea3bf18c932eb1713990f4eee394405c0469cee33812"
	I1212 00:37:12.397136  344886 cri.go:89] found id: "28856be11c5cc482e16607fa0597a6aa67543ac2cdc0834d1bd6d2909c225289"
	I1212 00:37:12.397146  344886 cri.go:89] found id: "7ac4fbd2f6c8431af2b7ef1b0c5d94360ce07676d7c162a642abe8818be8add8"
	I1212 00:37:12.397150  344886 cri.go:89] found id: ""
	I1212 00:37:12.397464  344886 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:37:12.413137  344886 retry.go:31] will retry after 479.668791ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:37:12Z" level=error msg="open /run/runc: no such file or directory"
	I1212 00:37:12.893678  344886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:37:12.912394  344886 pause.go:52] kubelet running: false
	I1212 00:37:12.912569  344886 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 00:37:13.129275  344886 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 00:37:13.129369  344886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 00:37:13.230207  344886 cri.go:89] found id: "666120bd1d8c90f7d6256aa0ec58755862735161bce1ffcf61fe971cc6d38d5c"
	I1212 00:37:13.230233  344886 cri.go:89] found id: "304099090ec96635a8679bb2021aa91454e4d7f75e1409b6f260e1b2ac58b4be"
	I1212 00:37:13.230240  344886 cri.go:89] found id: "6ef4dec1d3a751038100ee6d4e57afdad852b8404aec051e32e0037be43000bd"
	I1212 00:37:13.230245  344886 cri.go:89] found id: "a9a387380ed7bb228e584e659c552c5abaad6b837f63fddb18537419bddd4ad3"
	I1212 00:37:13.230250  344886 cri.go:89] found id: "e419328ac2418f753aacb6859126cefff6ec94cc6fbef558bfa47c49a2581809"
	I1212 00:37:13.230262  344886 cri.go:89] found id: "d3e91eedf153b18ff649b4cbb1342ffb304ec03a16f4b25db886370aa599f5bc"
	I1212 00:37:13.230266  344886 cri.go:89] found id: "089482e7f0285560f4ccccc8335ad1f7791ef2b6f9d70f836c5ec7be2488701b"
	I1212 00:37:13.230270  344886 cri.go:89] found id: "0e056232ea66fdd61191c4da75560d02835b72581f1d1cde64bdc7b6cae2fcf1"
	I1212 00:37:13.230275  344886 cri.go:89] found id: "7f1db19be25d152013d9ea3bf18c932eb1713990f4eee394405c0469cee33812"
	I1212 00:37:13.230289  344886 cri.go:89] found id: "28856be11c5cc482e16607fa0597a6aa67543ac2cdc0834d1bd6d2909c225289"
	I1212 00:37:13.230293  344886 cri.go:89] found id: "7ac4fbd2f6c8431af2b7ef1b0c5d94360ce07676d7c162a642abe8818be8add8"
	I1212 00:37:13.230298  344886 cri.go:89] found id: ""
	I1212 00:37:13.230351  344886 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:37:13.359077  344886 out.go:203] 
	W1212 00:37:13.360563  344886 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:37:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:37:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:37:13.360587  344886 out.go:285] * 
	* 
	W1212 00:37:13.369650  344886 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:37:13.377507  344886 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-079970 --alsologtostderr -v=1 failed: exit status 80
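Note on the exit status 80 above: it is minikube's GUEST_PAUSE error. The pause path disables the kubelet, lists containers in the kube-system, kubernetes-dashboard and istio-operator namespaces with crictl, then asks runc for its list of running containers, and every "sudo runc list -f json" attempt in this run fails with "open /run/runc: no such file or directory". Below is a minimal reproduction sketch in Go; it is a hypothetical helper, not minikube source code, and it simply re-runs over docker exec the same two commands the log shows, assuming the node container from this run (default-k8s-diff-port-079970) is still running on the same host.

// Hypothetical reproduction helper (assumption: not part of minikube).
// Re-runs the two commands the failing pause path executes inside the
// node container, named default-k8s-diff-port-079970 in this run.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\nerr: %v\n%s\n", args, err, out)
}

func main() {
	node := "default-k8s-diff-port-079970"

	// Step 1 of the pause path: list containers in the target namespaces via crictl.
	run("docker", "exec", node, "sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")

	// Step 2: ask runc for its view of running containers; in this run the call
	// fails with "open /run/runc: no such file or directory", which is what
	// surfaces as GUEST_PAUSE / exit status 80.
	run("docker", "exec", node, "sudo", "runc", "list", "-f", "json")
}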
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-079970
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-079970:

-- stdout --
	[
	    {
	        "Id": "d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84",
	        "Created": "2025-12-12T00:35:00.648347206Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:36:06.06895791Z",
	            "FinishedAt": "2025-12-12T00:36:05.182172786Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/hostname",
	        "HostsPath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/hosts",
	        "LogPath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84-json.log",
	        "Name": "/default-k8s-diff-port-079970",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-079970:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-079970",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84",
	                "LowerDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-079970",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-079970/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-079970",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-079970",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-079970",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e0513f10991f90a23fe6ee841b1d3e10c76c7458f7a608c6a0239d2b7c6c657f",
	            "SandboxKey": "/var/run/docker/netns/e0513f10991f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-079970": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e9d719bd40fd60bd78307adede76477356bbd0153233e68c6cf65e5ad664376",
	                    "EndpointID": "05e4beef18006825e4121487513f41e6757530d4de1265afcfb1ffc4e0baa05b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "da:b1:b0:cf:84:86",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-079970",
	                        "d079df7029d4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970: exit status 2 (448.168381ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-079970 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-079970 logs -n 25: (1.686565958s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-129742 sudo systemctl cat kubelet --no-pager                         │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo journalctl -xeu kubelet --all --full --no-pager          │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /etc/kubernetes/kubelet.conf                         │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /var/lib/kubelet/config.yaml                         │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo systemctl status docker --all --full --no-pager          │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo systemctl cat docker --no-pager                          │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /etc/docker/daemon.json                              │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo docker system info                                       │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo systemctl status cri-docker --all --full --no-pager      │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo systemctl cat cri-docker --no-pager                      │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cri-dockerd --version                                    │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo systemctl status containerd --all --full --no-pager      │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo systemctl cat containerd --no-pager                      │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /lib/systemd/system/containerd.service               │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /etc/containerd/config.toml                          │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo containerd config dump                                   │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo systemctl status crio --all --full --no-pager            │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo systemctl cat crio --no-pager                            │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo crio config                                              │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ image   │ default-k8s-diff-port-079970 image list --format=json                           │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ delete  │ -p kindnet-129742                                                               │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-079970 --alsologtostderr -v=1                          │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:36:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:36:55.126951  338059 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:36:55.127204  338059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:36:55.127213  338059 out.go:374] Setting ErrFile to fd 2...
	I1212 00:36:55.127217  338059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:36:55.127438  338059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:36:55.128003  338059 out.go:368] Setting JSON to false
	I1212 00:36:55.129230  338059 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4761,"bootTime":1765495054,"procs":431,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:36:55.129279  338059 start.go:143] virtualization: kvm guest
	I1212 00:36:55.131129  338059 out.go:179] * [custom-flannel-129742] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:36:55.132813  338059 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:36:55.132838  338059 notify.go:221] Checking for updates...
	I1212 00:36:55.135071  338059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:36:55.136368  338059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:36:55.137492  338059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:36:55.138580  338059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:36:55.139690  338059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:36:55.141119  338059 config.go:182] Loaded profile config "calico-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:36:55.141213  338059 config.go:182] Loaded profile config "default-k8s-diff-port-079970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:36:55.141282  338059 config.go:182] Loaded profile config "kindnet-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:36:55.141362  338059 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:36:55.163987  338059 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:36:55.164069  338059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:36:55.221663  338059 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 00:36:55.212323667 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:36:55.221807  338059 docker.go:319] overlay module found
	I1212 00:36:55.223361  338059 out.go:179] * Using the docker driver based on user configuration
	I1212 00:36:55.224411  338059 start.go:309] selected driver: docker
	I1212 00:36:55.224426  338059 start.go:927] validating driver "docker" against <nil>
	I1212 00:36:55.224436  338059 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:36:55.224983  338059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:36:55.286421  338059 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 00:36:55.275700078 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:36:55.286622  338059 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:36:55.286925  338059 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:36:55.288577  338059 out.go:179] * Using Docker driver with root privileges
	I1212 00:36:55.289809  338059 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1212 00:36:55.289839  338059 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1212 00:36:55.289909  338059 start.go:353] cluster config:
	{Name:custom-flannel-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:36:55.291168  338059 out.go:179] * Starting "custom-flannel-129742" primary control-plane node in "custom-flannel-129742" cluster
	I1212 00:36:55.292143  338059 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:36:55.293264  338059 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:36:55.294289  338059 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:36:55.294317  338059 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:36:55.294334  338059 cache.go:65] Caching tarball of preloaded images
	I1212 00:36:55.294386  338059 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:36:55.294411  338059 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:36:55.294422  338059 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:36:55.294549  338059 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/config.json ...
	I1212 00:36:55.294578  338059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/config.json: {Name:mk4eab79b679b8276e7bf99eea4929f8147d6306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:36:55.313962  338059 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:36:55.313982  338059 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:36:55.314000  338059 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:36:55.314030  338059 start.go:360] acquireMachinesLock for custom-flannel-129742: {Name:mke25d1c5f8d769c9ec5480c11024915939caafb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:36:55.314126  338059 start.go:364] duration metric: took 77.692µs to acquireMachinesLock for "custom-flannel-129742"
	I1212 00:36:55.314152  338059 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:36:55.314249  338059 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:36:54.200070  333444 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:36:54.368470  333444 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:36:55.042907  333444 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:36:55.230023  333444 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:36:55.818452  333444 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:36:55.819381  333444 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:36:55.825509  333444 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1212 00:36:51.785608  323348 pod_ready.go:104] pod "coredns-66bc5c9577-jdmpv" is not "Ready", error: <nil>
	W1212 00:36:54.284778  323348 pod_ready.go:104] pod "coredns-66bc5c9577-jdmpv" is not "Ready", error: <nil>
	W1212 00:36:56.285063  323348 pod_ready.go:104] pod "coredns-66bc5c9577-jdmpv" is not "Ready", error: <nil>
	I1212 00:36:56.784631  323348 pod_ready.go:94] pod "coredns-66bc5c9577-jdmpv" is "Ready"
	I1212 00:36:56.784662  323348 pod_ready.go:86] duration metric: took 39.505189583s for pod "coredns-66bc5c9577-jdmpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:36:56.787062  323348 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:36:56.791247  323348 pod_ready.go:94] pod "etcd-default-k8s-diff-port-079970" is "Ready"
	I1212 00:36:56.791269  323348 pod_ready.go:86] duration metric: took 4.186014ms for pod "etcd-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:36:56.793024  323348 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:36:56.796719  323348 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-079970" is "Ready"
	I1212 00:36:56.796744  323348 pod_ready.go:86] duration metric: took 3.69532ms for pod "kube-apiserver-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:36:56.798429  323348 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:36:56.983519  323348 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-079970" is "Ready"
	I1212 00:36:56.983544  323348 pod_ready.go:86] duration metric: took 185.095674ms for pod "kube-controller-manager-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:36:57.184205  323348 pod_ready.go:83] waiting for pod "kube-proxy-dp8fl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:36:57.584548  323348 pod_ready.go:94] pod "kube-proxy-dp8fl" is "Ready"
	I1212 00:36:57.584576  323348 pod_ready.go:86] duration metric: took 400.345362ms for pod "kube-proxy-dp8fl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:36:57.783768  323348 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:36:58.183853  323348 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-079970" is "Ready"
	I1212 00:36:58.183876  323348 pod_ready.go:86] duration metric: took 400.075789ms for pod "kube-scheduler-default-k8s-diff-port-079970" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:36:58.183887  323348 pod_ready.go:40] duration metric: took 40.908167719s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:36:58.225582  323348 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:36:58.480942  323348 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-079970" cluster and "default" namespace by default
	I1212 00:36:55.827112  333444 out.go:252]   - Booting up control plane ...
	I1212 00:36:55.827299  333444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:36:55.827430  333444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:36:55.828585  333444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:36:55.850901  333444 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:36:55.851083  333444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:36:55.859700  333444 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:36:55.860075  333444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:36:55.860138  333444 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:36:55.969538  333444 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:36:55.969702  333444 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:36:56.970869  333444 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00162078s
	I1212 00:36:56.973764  333444 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:36:56.973899  333444 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1212 00:36:56.974063  333444 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:36:56.974188  333444 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:36:55.315796  338059 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 00:36:55.316035  338059 start.go:159] libmachine.API.Create for "custom-flannel-129742" (driver="docker")
	I1212 00:36:55.316065  338059 client.go:173] LocalClient.Create starting
	I1212 00:36:55.316162  338059 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem
	I1212 00:36:55.316196  338059 main.go:143] libmachine: Decoding PEM data...
	I1212 00:36:55.316230  338059 main.go:143] libmachine: Parsing certificate...
	I1212 00:36:55.316313  338059 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem
	I1212 00:36:55.316358  338059 main.go:143] libmachine: Decoding PEM data...
	I1212 00:36:55.316379  338059 main.go:143] libmachine: Parsing certificate...
	I1212 00:36:55.316861  338059 cli_runner.go:164] Run: docker network inspect custom-flannel-129742 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:36:55.332820  338059 cli_runner.go:211] docker network inspect custom-flannel-129742 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:36:55.332885  338059 network_create.go:284] running [docker network inspect custom-flannel-129742] to gather additional debugging logs...
	I1212 00:36:55.332904  338059 cli_runner.go:164] Run: docker network inspect custom-flannel-129742
	W1212 00:36:55.348336  338059 cli_runner.go:211] docker network inspect custom-flannel-129742 returned with exit code 1
	I1212 00:36:55.348357  338059 network_create.go:287] error running [docker network inspect custom-flannel-129742]: docker network inspect custom-flannel-129742: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-129742 not found
	I1212 00:36:55.348370  338059 network_create.go:289] output of [docker network inspect custom-flannel-129742]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-129742 not found
	
	** /stderr **
	I1212 00:36:55.348453  338059 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:36:55.365588  338059 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ad72354fdcf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:1e:a7:00:22:62} reservation:<nil>}
	I1212 00:36:55.366434  338059 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-18bf2e8051c8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:8e:8f:a6:9d:0c} reservation:<nil>}
	I1212 00:36:55.367343  338059 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5251ddf0a35 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:5a:cf:fd:1d:2a} reservation:<nil>}
	I1212 00:36:55.368015  338059 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a80c79a49522 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:af:f7:69:8a:ae} reservation:<nil>}
	I1212 00:36:55.368992  338059 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eda3f0}
	I1212 00:36:55.369023  338059 network_create.go:124] attempt to create docker network custom-flannel-129742 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 00:36:55.369072  338059 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-129742 custom-flannel-129742
	I1212 00:36:55.416879  338059 network_create.go:108] docker network custom-flannel-129742 192.168.85.0/24 created
	I1212 00:36:55.416906  338059 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-129742" container
	I1212 00:36:55.416961  338059 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:36:55.436587  338059 cli_runner.go:164] Run: docker volume create custom-flannel-129742 --label name.minikube.sigs.k8s.io=custom-flannel-129742 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:36:55.454269  338059 oci.go:103] Successfully created a docker volume custom-flannel-129742
	I1212 00:36:55.454345  338059 cli_runner.go:164] Run: docker run --rm --name custom-flannel-129742-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-129742 --entrypoint /usr/bin/test -v custom-flannel-129742:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:36:55.839299  338059 oci.go:107] Successfully prepared a docker volume custom-flannel-129742
	I1212 00:36:55.839363  338059 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:36:55.839379  338059 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:36:55.839440  338059 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-129742:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:36:58.996185  338059 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-129742:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.156689072s)
	I1212 00:36:58.996228  338059 kic.go:203] duration metric: took 3.156845325s to extract preloaded images to volume ...
	W1212 00:36:58.996324  338059 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 00:36:58.996386  338059 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 00:36:58.996451  338059 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:36:59.077763  338059 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-129742 --name custom-flannel-129742 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-129742 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-129742 --network custom-flannel-129742 --ip 192.168.85.2 --volume custom-flannel-129742:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 00:36:59.417275  338059 cli_runner.go:164] Run: docker container inspect custom-flannel-129742 --format={{.State.Running}}
	I1212 00:36:59.446945  338059 cli_runner.go:164] Run: docker container inspect custom-flannel-129742 --format={{.State.Status}}
	I1212 00:36:59.472300  338059 cli_runner.go:164] Run: docker exec custom-flannel-129742 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:36:59.526228  338059 oci.go:144] the created container "custom-flannel-129742" has a running status.
	I1212 00:36:59.526256  338059 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/custom-flannel-129742/id_rsa...
	I1212 00:36:59.711137  338059 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-10975/.minikube/machines/custom-flannel-129742/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:36:59.745866  338059 cli_runner.go:164] Run: docker container inspect custom-flannel-129742 --format={{.State.Status}}
	I1212 00:36:59.768678  338059 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:36:59.768703  338059 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-129742 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:36:59.826489  338059 cli_runner.go:164] Run: docker container inspect custom-flannel-129742 --format={{.State.Status}}
	I1212 00:36:59.847748  338059 machine.go:94] provisionDockerMachine start ...
	I1212 00:36:59.847827  338059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-129742
	I1212 00:36:59.869395  338059 main.go:143] libmachine: Using SSH client type: native
	I1212 00:36:59.869961  338059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1212 00:36:59.870001  338059 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:36:59.870970  338059 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32880->127.0.0.1:33123: read: connection reset by peer
	I1212 00:37:00.003649  333444 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.02974581s
	I1212 00:37:01.110841  333444 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.137039827s
	I1212 00:37:02.975970  333444 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002164797s
	I1212 00:37:02.993857  333444 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:37:03.004355  333444 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:37:03.015314  333444 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:37:03.015597  333444 kubeadm.go:319] [mark-control-plane] Marking the node calico-129742 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:37:03.029042  333444 kubeadm.go:319] [bootstrap-token] Using token: 5kahw6.6uga3r66p5ru970s
	I1212 00:37:03.030269  333444 out.go:252]   - Configuring RBAC rules ...
	I1212 00:37:03.030414  333444 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:37:03.034143  333444 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:37:03.040504  333444 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:37:03.042991  333444 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:37:03.045538  333444 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:37:03.047939  333444 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:37:03.382284  333444 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:37:03.798564  333444 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:37:04.381941  333444 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:37:04.383206  333444 kubeadm.go:319] 
	I1212 00:37:04.383297  333444 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:37:04.383306  333444 kubeadm.go:319] 
	I1212 00:37:04.383402  333444 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:37:04.383416  333444 kubeadm.go:319] 
	I1212 00:37:04.383445  333444 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:37:04.383561  333444 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:37:04.383662  333444 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:37:04.383682  333444 kubeadm.go:319] 
	I1212 00:37:04.383758  333444 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:37:04.383776  333444 kubeadm.go:319] 
	I1212 00:37:04.383842  333444 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:37:04.383850  333444 kubeadm.go:319] 
	I1212 00:37:04.383920  333444 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:37:04.384036  333444 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:37:04.384094  333444 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:37:04.384100  333444 kubeadm.go:319] 
	I1212 00:37:04.384174  333444 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:37:04.384281  333444 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:37:04.384290  333444 kubeadm.go:319] 
	I1212 00:37:04.384397  333444 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5kahw6.6uga3r66p5ru970s \
	I1212 00:37:04.384575  333444 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f \
	I1212 00:37:04.384609  333444 kubeadm.go:319] 	--control-plane 
	I1212 00:37:04.384623  333444 kubeadm.go:319] 
	I1212 00:37:04.384728  333444 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:37:04.384736  333444 kubeadm.go:319] 
	I1212 00:37:04.384829  333444 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5kahw6.6uga3r66p5ru970s \
	I1212 00:37:04.384925  333444 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:19fbb4ff5433ca1df50801c5cac6b194d490b33a4066cc5884080af231dcc75f 
	I1212 00:37:04.388275  333444 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:37:04.388445  333444 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:37:04.388492  333444 cni.go:84] Creating CNI manager for "calico"
	I1212 00:37:04.390073  333444 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1212 00:37:03.010754  338059 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-129742
	
	I1212 00:37:03.010786  338059 ubuntu.go:182] provisioning hostname "custom-flannel-129742"
	I1212 00:37:03.010854  338059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-129742
	I1212 00:37:03.031663  338059 main.go:143] libmachine: Using SSH client type: native
	I1212 00:37:03.031876  338059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1212 00:37:03.031889  338059 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-129742 && echo "custom-flannel-129742" | sudo tee /etc/hostname
	I1212 00:37:03.182221  338059 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-129742
	
	I1212 00:37:03.182314  338059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-129742
	I1212 00:37:03.202404  338059 main.go:143] libmachine: Using SSH client type: native
	I1212 00:37:03.202657  338059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1212 00:37:03.202675  338059 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-129742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-129742/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-129742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:37:03.338310  338059 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:37:03.338349  338059 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-10975/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-10975/.minikube}
	I1212 00:37:03.338416  338059 ubuntu.go:190] setting up certificates
	I1212 00:37:03.338442  338059 provision.go:84] configureAuth start
	I1212 00:37:03.338533  338059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-129742
	I1212 00:37:03.359901  338059 provision.go:143] copyHostCerts
	I1212 00:37:03.359981  338059 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem, removing ...
	I1212 00:37:03.359994  338059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem
	I1212 00:37:03.360077  338059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/cert.pem (1123 bytes)
	I1212 00:37:03.360214  338059 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem, removing ...
	I1212 00:37:03.360229  338059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem
	I1212 00:37:03.360277  338059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/key.pem (1679 bytes)
	I1212 00:37:03.360380  338059 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem, removing ...
	I1212 00:37:03.360390  338059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem
	I1212 00:37:03.360428  338059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-10975/.minikube/ca.pem (1082 bytes)
	I1212 00:37:03.360557  338059 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-129742 san=[127.0.0.1 192.168.85.2 custom-flannel-129742 localhost minikube]
	I1212 00:37:03.493885  338059 provision.go:177] copyRemoteCerts
	I1212 00:37:03.493962  338059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:37:03.494011  338059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-129742
	I1212 00:37:03.514628  338059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/custom-flannel-129742/id_rsa Username:docker}
	I1212 00:37:03.616754  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:37:03.648733  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 00:37:03.673561  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:37:03.694154  338059 provision.go:87] duration metric: took 355.69416ms to configureAuth
	I1212 00:37:03.694191  338059 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:37:03.694348  338059 config.go:182] Loaded profile config "custom-flannel-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:37:03.694450  338059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-129742
	I1212 00:37:03.712391  338059 main.go:143] libmachine: Using SSH client type: native
	I1212 00:37:03.712698  338059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1212 00:37:03.712724  338059 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:37:03.991609  338059 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:37:03.991648  338059 machine.go:97] duration metric: took 4.143879232s to provisionDockerMachine
	I1212 00:37:03.991660  338059 client.go:176] duration metric: took 8.675584973s to LocalClient.Create
	I1212 00:37:03.991682  338059 start.go:167] duration metric: took 8.675646726s to libmachine.API.Create "custom-flannel-129742"
	I1212 00:37:03.991695  338059 start.go:293] postStartSetup for "custom-flannel-129742" (driver="docker")
	I1212 00:37:03.991706  338059 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:37:03.991759  338059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:37:03.991806  338059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-129742
	I1212 00:37:04.012103  338059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/custom-flannel-129742/id_rsa Username:docker}
	I1212 00:37:04.113963  338059 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:37:04.118150  338059 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:37:04.118183  338059 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:37:04.118196  338059 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/addons for local assets ...
	I1212 00:37:04.118254  338059 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-10975/.minikube/files for local assets ...
	I1212 00:37:04.118378  338059 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem -> 145032.pem in /etc/ssl/certs
	I1212 00:37:04.118610  338059 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:37:04.128226  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:37:04.148601  338059 start.go:296] duration metric: took 156.894193ms for postStartSetup
	I1212 00:37:04.148969  338059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-129742
	I1212 00:37:04.165888  338059 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/config.json ...
	I1212 00:37:04.166182  338059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:37:04.166244  338059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-129742
	I1212 00:37:04.183667  338059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/custom-flannel-129742/id_rsa Username:docker}
	I1212 00:37:04.276511  338059 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:37:04.281188  338059 start.go:128] duration metric: took 8.96692463s to createHost
	I1212 00:37:04.281212  338059 start.go:83] releasing machines lock for "custom-flannel-129742", held for 8.967071595s
	I1212 00:37:04.281270  338059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-129742
	I1212 00:37:04.301701  338059 ssh_runner.go:195] Run: cat /version.json
	I1212 00:37:04.301753  338059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-129742
	I1212 00:37:04.301773  338059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:37:04.301839  338059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-129742
	I1212 00:37:04.320996  338059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/custom-flannel-129742/id_rsa Username:docker}
	I1212 00:37:04.321378  338059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/custom-flannel-129742/id_rsa Username:docker}
	I1212 00:37:04.415836  338059 ssh_runner.go:195] Run: systemctl --version
	I1212 00:37:04.482937  338059 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:37:04.520810  338059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:37:04.525700  338059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:37:04.525772  338059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:37:04.556288  338059 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:37:04.556312  338059 start.go:496] detecting cgroup driver to use...
	I1212 00:37:04.556343  338059 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 00:37:04.556397  338059 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:37:04.573047  338059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:37:04.585821  338059 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:37:04.585892  338059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:37:04.606276  338059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:37:04.627141  338059 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:37:04.746442  338059 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:37:04.869104  338059 docker.go:234] disabling docker service ...
	I1212 00:37:04.869198  338059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:37:04.893640  338059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:37:04.909110  338059 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:37:05.051984  338059 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:37:05.200154  338059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:37:05.216992  338059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:37:05.236453  338059 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:37:05.236531  338059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:37:05.251464  338059 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 00:37:05.251601  338059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:37:05.264407  338059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:37:05.274530  338059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:37:05.285877  338059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:37:05.296344  338059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:37:05.307466  338059 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:37:05.324395  338059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:37:05.334119  338059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:37:05.342889  338059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:37:05.350855  338059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:37:05.456879  338059 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:37:05.608103  338059 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:37:05.608164  338059 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:37:05.612402  338059 start.go:564] Will wait 60s for crictl version
	I1212 00:37:05.612467  338059 ssh_runner.go:195] Run: which crictl
	I1212 00:37:05.616036  338059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:37:05.642351  338059 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:37:05.642432  338059 ssh_runner.go:195] Run: crio --version
	I1212 00:37:05.672363  338059 ssh_runner.go:195] Run: crio --version
	I1212 00:37:05.704470  338059 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:37:04.391436  333444 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 00:37:04.391462  333444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1212 00:37:04.407405  333444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:37:05.292602  333444 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:37:05.292678  333444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:37:05.292761  333444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-129742 minikube.k8s.io/updated_at=2025_12_12T00_37_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=calico-129742 minikube.k8s.io/primary=true
	I1212 00:37:05.305767  333444 ops.go:34] apiserver oom_adj: -16
	I1212 00:37:05.372442  333444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:37:05.872593  333444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:37:06.373537  333444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:37:06.873317  333444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:37:07.372572  333444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:37:07.872577  333444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:37:08.373238  333444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:37:08.441964  333444 kubeadm.go:1114] duration metric: took 3.149342824s to wait for elevateKubeSystemPrivileges
	I1212 00:37:08.442022  333444 kubeadm.go:403] duration metric: took 17.097756501s to StartCluster
	I1212 00:37:08.442043  333444 settings.go:142] acquiring lock: {Name:mk5afd7bc478f09725bd1765f0e57b39ef62ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:08.442114  333444 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:37:08.443803  333444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/kubeconfig: {Name:mk7a0488362e149c5640e957b6167592452fe754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:08.444076  333444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:37:08.444084  333444 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:37:08.444140  333444 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:37:08.444249  333444 addons.go:70] Setting storage-provisioner=true in profile "calico-129742"
	I1212 00:37:08.444255  333444 addons.go:70] Setting default-storageclass=true in profile "calico-129742"
	I1212 00:37:08.444269  333444 addons.go:239] Setting addon storage-provisioner=true in "calico-129742"
	I1212 00:37:08.444271  333444 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-129742"
	I1212 00:37:08.444273  333444 config.go:182] Loaded profile config "calico-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:37:08.444299  333444 host.go:66] Checking if "calico-129742" exists ...
	I1212 00:37:08.444685  333444 cli_runner.go:164] Run: docker container inspect calico-129742 --format={{.State.Status}}
	I1212 00:37:08.444892  333444 cli_runner.go:164] Run: docker container inspect calico-129742 --format={{.State.Status}}
	I1212 00:37:08.445781  333444 out.go:179] * Verifying Kubernetes components...
	I1212 00:37:08.447106  333444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:37:08.472559  333444 addons.go:239] Setting addon default-storageclass=true in "calico-129742"
	I1212 00:37:08.472600  333444 host.go:66] Checking if "calico-129742" exists ...
	I1212 00:37:08.472994  333444 cli_runner.go:164] Run: docker container inspect calico-129742 --format={{.State.Status}}
	I1212 00:37:08.475569  333444 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:37:08.477363  333444 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:37:08.477382  333444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:37:08.477433  333444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-129742
	I1212 00:37:08.503088  333444 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:37:08.503185  333444 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:37:08.503282  333444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-129742
	I1212 00:37:08.506279  333444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/calico-129742/id_rsa Username:docker}
	I1212 00:37:08.526889  333444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/calico-129742/id_rsa Username:docker}
	I1212 00:37:08.563005  333444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:37:08.614204  333444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:37:08.642363  333444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:37:08.652084  333444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:37:08.774134  333444 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1212 00:37:08.775381  333444 node_ready.go:35] waiting up to 15m0s for node "calico-129742" to be "Ready" ...
	I1212 00:37:09.005381  333444 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:37:09.006394  333444 addons.go:530] duration metric: took 562.249612ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 00:37:05.705596  338059 cli_runner.go:164] Run: docker network inspect custom-flannel-129742 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:37:05.724561  338059 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 00:37:05.728738  338059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:37:05.739387  338059 kubeadm.go:884] updating cluster {Name:custom-flannel-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:37:05.739495  338059 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:37:05.739543  338059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:37:05.774042  338059 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:37:05.774064  338059 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:37:05.774109  338059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:37:05.802321  338059 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:37:05.802343  338059 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:37:05.802350  338059 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1212 00:37:05.802438  338059 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-129742 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1212 00:37:05.802532  338059 ssh_runner.go:195] Run: crio config
	I1212 00:37:05.857464  338059 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1212 00:37:05.857522  338059 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:37:05.857551  338059 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-129742 NodeName:custom-flannel-129742 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:37:05.857728  338059 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-129742"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:37:05.857796  338059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:37:05.866925  338059 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:37:05.866988  338059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:37:05.875521  338059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1212 00:37:05.889910  338059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:37:05.906226  338059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1212 00:37:05.920092  338059 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:37:05.923920  338059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:37:05.934843  338059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:37:06.023294  338059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:37:06.046765  338059 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742 for IP: 192.168.85.2
	I1212 00:37:06.046781  338059 certs.go:195] generating shared ca certs ...
	I1212 00:37:06.046801  338059 certs.go:227] acquiring lock for ca certs: {Name:mk07c773c93bef21361d1ca82ce0dfed557d7e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:06.046973  338059 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key
	I1212 00:37:06.047025  338059 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key
	I1212 00:37:06.047039  338059 certs.go:257] generating profile certs ...
	I1212 00:37:06.047102  338059 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/client.key
	I1212 00:37:06.047129  338059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/client.crt with IP's: []
	I1212 00:37:06.210656  338059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/client.crt ...
	I1212 00:37:06.210680  338059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/client.crt: {Name:mkf546ac8fc7f8d6cc7c873f90f736f4ab7e32be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:06.210850  338059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/client.key ...
	I1212 00:37:06.210869  338059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/client.key: {Name:mk2f0b50200ba3b83c39329e99cae13d3b8d5a7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:06.210979  338059 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.key.68894602
	I1212 00:37:06.210996  338059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.crt.68894602 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 00:37:06.320354  338059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.crt.68894602 ...
	I1212 00:37:06.320374  338059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.crt.68894602: {Name:mk0438d3904809e094a23155e12acf01fbd4d159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:06.320547  338059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.key.68894602 ...
	I1212 00:37:06.320566  338059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.key.68894602: {Name:mk4ed04be6870932147401314f0f4fb461b23319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:06.320671  338059 certs.go:382] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.crt.68894602 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.crt
	I1212 00:37:06.320748  338059 certs.go:386] copying /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.key.68894602 -> /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.key
	I1212 00:37:06.320805  338059 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/proxy-client.key
	I1212 00:37:06.320820  338059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/proxy-client.crt with IP's: []
	I1212 00:37:06.429534  338059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/proxy-client.crt ...
	I1212 00:37:06.429566  338059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/proxy-client.crt: {Name:mkf4f84f9b5c116ff2200f53971aaf35d089f09f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:06.429765  338059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/proxy-client.key ...
	I1212 00:37:06.429789  338059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/proxy-client.key: {Name:mkfcada56f5346fc35b3adf05da49e62057ad5a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:06.430017  338059 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem (1338 bytes)
	W1212 00:37:06.430069  338059 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503_empty.pem, impossibly tiny 0 bytes
	I1212 00:37:06.430081  338059 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:37:06.430115  338059 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:37:06.430157  338059 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:37:06.430191  338059 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/certs/key.pem (1679 bytes)
	I1212 00:37:06.430249  338059 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem (1708 bytes)
	I1212 00:37:06.431007  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:37:06.451617  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:37:06.469917  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:37:06.487685  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 00:37:06.506160  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 00:37:06.523373  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:37:06.543412  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:37:06.562279  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/custom-flannel-129742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:37:06.580485  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/certs/14503.pem --> /usr/share/ca-certificates/14503.pem (1338 bytes)
	I1212 00:37:06.600113  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/ssl/certs/145032.pem --> /usr/share/ca-certificates/145032.pem (1708 bytes)
	I1212 00:37:06.618264  338059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:37:06.635837  338059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:37:06.648948  338059 ssh_runner.go:195] Run: openssl version
	I1212 00:37:06.655699  338059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14503.pem
	I1212 00:37:06.663640  338059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14503.pem /etc/ssl/certs/14503.pem
	I1212 00:37:06.671038  338059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14503.pem
	I1212 00:37:06.675277  338059 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:04 /usr/share/ca-certificates/14503.pem
	I1212 00:37:06.675327  338059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14503.pem
	I1212 00:37:06.719809  338059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:37:06.729624  338059 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14503.pem /etc/ssl/certs/51391683.0
	I1212 00:37:06.738749  338059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145032.pem
	I1212 00:37:06.748039  338059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145032.pem /etc/ssl/certs/145032.pem
	I1212 00:37:06.757631  338059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145032.pem
	I1212 00:37:06.761723  338059 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:04 /usr/share/ca-certificates/145032.pem
	I1212 00:37:06.761789  338059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145032.pem
	I1212 00:37:06.800465  338059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:37:06.808994  338059 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145032.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:37:06.816268  338059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:37:06.825488  338059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:37:06.833824  338059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:37:06.838186  338059 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:37:06.838239  338059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:37:06.876773  338059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:37:06.884205  338059 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:37:06.891825  338059 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:37:06.895389  338059 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:37:06.895446  338059 kubeadm.go:401] StartCluster: {Name:custom-flannel-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:37:06.895562  338059 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:37:06.895613  338059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:37:06.927264  338059 cri.go:89] found id: ""
	I1212 00:37:06.927336  338059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:37:06.937389  338059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:37:06.946636  338059 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:37:06.946692  338059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:37:06.954907  338059 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:37:06.954925  338059 kubeadm.go:158] found existing configuration files:
	
	I1212 00:37:06.954976  338059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:37:06.964094  338059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:37:06.964142  338059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:37:06.973091  338059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:37:06.982470  338059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:37:06.982555  338059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:37:06.990680  338059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:37:06.998140  338059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:37:06.998188  338059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:37:07.006123  338059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:37:07.013652  338059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:37:07.013696  338059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:37:07.021936  338059 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:37:07.086209  338059 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 00:37:07.150205  338059 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Dec 12 00:36:39 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:39.552468401Z" level=info msg="Started container" PID=1744 containerID=a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper id=fed36fad-0aff-42ed-bb38-dbf399c485f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8fccf844463d7e6b74ca63826e6f3d7bd560533167c790e764153ece4ef3eaa
	Dec 12 00:36:39 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:39.621367985Z" level=info msg="Removing container: 3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52" id=798adbf1-5d77-4bbb-8601-ee997fab5f8c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:36:39 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:39.633246626Z" level=info msg="Removed container 3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper" id=798adbf1-5d77-4bbb-8601-ee997fab5f8c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.643792825Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ff0276f3-5bab-49a3-92b9-5df461875322 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.644895165Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=28ed8d79-6fa3-42ca-a68f-79cf3cbb314c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.646131472Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ce386fba-6495-489f-a8fd-65c30869153f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.646531333Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.651919766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.652103577Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3e685fe0ecdd31b9103baf81ea93e3446a035e7dd63edbfc347ea06b82fa6787/merged/etc/passwd: no such file or directory"
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.652128597Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3e685fe0ecdd31b9103baf81ea93e3446a035e7dd63edbfc347ea06b82fa6787/merged/etc/group: no such file or directory"
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.652418143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.686705118Z" level=info msg="Created container 666120bd1d8c90f7d6256aa0ec58755862735161bce1ffcf61fe971cc6d38d5c: kube-system/storage-provisioner/storage-provisioner" id=ce386fba-6495-489f-a8fd-65c30869153f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.687279421Z" level=info msg="Starting container: 666120bd1d8c90f7d6256aa0ec58755862735161bce1ffcf61fe971cc6d38d5c" id=80862c54-10f4-40a0-ac0f-6fb7624d52b6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.689206719Z" level=info msg="Started container" PID=1758 containerID=666120bd1d8c90f7d6256aa0ec58755862735161bce1ffcf61fe971cc6d38d5c description=kube-system/storage-provisioner/storage-provisioner id=80862c54-10f4-40a0-ac0f-6fb7624d52b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b34394bb51462fd48a96ce2324295a78423da44d8f753581b8fc33030c568e61
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.501799249Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3b506c53-96b9-4e3c-9a3e-4018755b8521 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.502775713Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=398b0828-618f-4c9b-8d07-7abe2c8fe1a0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.503954328Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper" id=06891259-e2d3-491b-bcf6-ef5c510b8a31 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.504088122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.51073995Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.511289396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.541687771Z" level=info msg="Created container 28856be11c5cc482e16607fa0597a6aa67543ac2cdc0834d1bd6d2909c225289: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper" id=06891259-e2d3-491b-bcf6-ef5c510b8a31 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.542299881Z" level=info msg="Starting container: 28856be11c5cc482e16607fa0597a6aa67543ac2cdc0834d1bd6d2909c225289" id=a36911a0-cc46-4954-8b25-ff4bec9757c9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.544517868Z" level=info msg="Started container" PID=1794 containerID=28856be11c5cc482e16607fa0597a6aa67543ac2cdc0834d1bd6d2909c225289 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper id=a36911a0-cc46-4954-8b25-ff4bec9757c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8fccf844463d7e6b74ca63826e6f3d7bd560533167c790e764153ece4ef3eaa
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.697867553Z" level=info msg="Removing container: a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3" id=89fb73c7-8c52-45bd-a214-30ee169a944b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.708245144Z" level=info msg="Removed container a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper" id=89fb73c7-8c52-45bd-a214-30ee169a944b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	28856be11c5cc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   e8fccf844463d       dashboard-metrics-scraper-6ffb444bf9-crjgz             kubernetes-dashboard
	666120bd1d8c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   b34394bb51462       storage-provisioner                                    kube-system
	7ac4fbd2f6c84       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago       Running             kubernetes-dashboard        0                   4f9b9d4f610cb       kubernetes-dashboard-855c9754f9-262gm                  kubernetes-dashboard
	304099090ec96       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   07fb9ab25ca85       coredns-66bc5c9577-jdmpv                               kube-system
	69a084da2b60b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   846f024726f02       busybox                                                default
	6ef4dec1d3a75       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   b34394bb51462       storage-provisioner                                    kube-system
	a9a387380ed7b       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           57 seconds ago       Running             kube-proxy                  0                   dda4291f49f73       kube-proxy-dp8fl                                       kube-system
	e419328ac2418       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   c525e9e9c652b       kindnet-g8hsv                                          kube-system
	d3e91eedf153b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           About a minute ago   Running             kube-controller-manager     0                   5d00a63b60416       kube-controller-manager-default-k8s-diff-port-079970   kube-system
	089482e7f0285       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           About a minute ago   Running             kube-apiserver              0                   4e7974467f528       kube-apiserver-default-k8s-diff-port-079970            kube-system
	0e056232ea66f       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           About a minute ago   Running             kube-scheduler              0                   1143a339bcf9e       kube-scheduler-default-k8s-diff-port-079970            kube-system
	7f1db19be25d1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   48337e5b28e82       etcd-default-k8s-diff-port-079970                      kube-system
	
	
	==> coredns [304099090ec96635a8679bb2021aa91454e4d7f75e1409b6f260e1b2ac58b4be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58381 - 41838 "HINFO IN 7295719463501735635.1848477618576580803. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.476393395s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-079970
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-079970
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=default-k8s-diff-port-079970
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_35_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:35:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-079970
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:37:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:36:56 +0000   Fri, 12 Dec 2025 00:35:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:36:56 +0000   Fri, 12 Dec 2025 00:35:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:36:56 +0000   Fri, 12 Dec 2025 00:35:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:36:56 +0000   Fri, 12 Dec 2025 00:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-079970
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                2f6ae672-3816-4aaa-aade-b1dfd5ff98c4
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-jdmpv                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-079970                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-g8hsv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-default-k8s-diff-port-079970             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-079970    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-dp8fl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-default-k8s-diff-port-079970             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-crjgz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-262gm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 111s                 kube-proxy       
	  Normal  Starting                 57s                  kube-proxy       
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           112s                 node-controller  Node default-k8s-diff-port-079970 event: Registered Node default-k8s-diff-port-079970 in Controller
	  Normal  NodeReady                101s                 kubelet          Node default-k8s-diff-port-079970 status is now: NodeReady
	  Normal  Starting                 62s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                  node-controller  Node default-k8s-diff-port-079970 event: Registered Node default-k8s-diff-port-079970 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [7f1db19be25d152013d9ea3bf18c932eb1713990f4eee394405c0469cee33812] <==
	{"level":"warn","ts":"2025-12-12T00:36:14.932209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.948440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.958145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.966705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.978723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.986396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.993703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.000971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.008692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.016303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.023173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.030234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.037023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.044738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.051694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.059752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.068387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.075779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.084442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.091814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.108011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.125202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.134572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.145062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.199685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58358","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:37:14 up  1:19,  0 user,  load average: 5.33, 3.93, 2.41
	Linux default-k8s-diff-port-079970 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e419328ac2418f753aacb6859126cefff6ec94cc6fbef558bfa47c49a2581809] <==
	I1212 00:36:17.049275       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 00:36:17.049530       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1212 00:36:17.049654       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:36:17.049669       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:36:17.049692       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:36:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:36:17.250331       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:36:17.250547       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:36:17.250565       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:36:17.267339       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:36:17.546269       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:36:17.546298       1 metrics.go:72] Registering metrics
	I1212 00:36:17.546589       1 controller.go:711] "Syncing nftables rules"
	I1212 00:36:27.250673       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:36:27.250747       1 main.go:301] handling current node
	I1212 00:36:37.256645       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:36:37.256682       1 main.go:301] handling current node
	I1212 00:36:47.250548       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:36:47.250588       1 main.go:301] handling current node
	I1212 00:36:57.250507       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:36:57.250544       1 main.go:301] handling current node
	I1212 00:37:07.250668       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:37:07.250707       1 main.go:301] handling current node
	
	
	==> kube-apiserver [089482e7f0285560f4ccccc8335ad1f7791ef2b6f9d70f836c5ec7be2488701b] <==
	I1212 00:36:15.770602       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 00:36:15.770582       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 00:36:15.771076       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 00:36:15.771213       1 aggregator.go:171] initial CRD sync complete...
	I1212 00:36:15.771230       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 00:36:15.771238       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:36:15.771246       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:36:15.771215       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 00:36:15.774958       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 00:36:15.790779       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 00:36:15.799908       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:36:15.811863       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 00:36:16.026516       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:36:16.054417       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:36:16.070145       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:36:16.076613       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:36:16.081910       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:36:16.110379       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.169.190"}
	I1212 00:36:16.119516       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.126.99"}
	I1212 00:36:16.667206       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:36:19.203049       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:36:19.503716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:36:19.552087       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d3e91eedf153b18ff649b4cbb1342ffb304ec03a16f4b25db886370aa599f5bc] <==
	I1212 00:36:19.083265       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 00:36:19.086499       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 00:36:19.087811       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1212 00:36:19.098813       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 00:36:19.098836       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 00:36:19.098865       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 00:36:19.099866       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 00:36:19.099892       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1212 00:36:19.099924       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 00:36:19.099935       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 00:36:19.099954       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 00:36:19.100039       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 00:36:19.099930       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 00:36:19.100102       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-079970"
	I1212 00:36:19.100143       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 00:36:19.100203       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1212 00:36:19.101312       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 00:36:19.101332       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 00:36:19.105405       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:36:19.107539       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:36:19.107554       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 00:36:19.107563       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 00:36:19.109658       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 00:36:19.111012       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 00:36:19.125636       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a9a387380ed7bb228e584e659c552c5abaad6b837f63fddb18537419bddd4ad3] <==
	I1212 00:36:16.913138       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:36:16.991938       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 00:36:17.092274       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 00:36:17.092337       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1212 00:36:17.092437       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:36:17.109778       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:36:17.109823       1 server_linux.go:132] "Using iptables Proxier"
	I1212 00:36:17.114655       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:36:17.114999       1 server.go:527] "Version info" version="v1.34.2"
	I1212 00:36:17.115029       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:36:17.116122       1 config.go:309] "Starting node config controller"
	I1212 00:36:17.116142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:36:17.116151       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:36:17.116196       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:36:17.116211       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:36:17.116307       1 config.go:200] "Starting service config controller"
	I1212 00:36:17.116322       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:36:17.116352       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:36:17.116361       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:36:17.217198       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:36:17.217235       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:36:17.217235       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0e056232ea66fdd61191c4da75560d02835b72581f1d1cde64bdc7b6cae2fcf1] <==
	I1212 00:36:14.378630       1 serving.go:386] Generated self-signed cert in-memory
	W1212 00:36:15.696546       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:36:15.696755       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1212 00:36:15.696827       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:36:15.696838       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:36:15.727407       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 00:36:15.727447       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:36:15.732007       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:36:15.732058       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:36:15.733067       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 00:36:15.733545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 00:36:15.833165       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:36:23 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:23.577595     718 scope.go:117] "RemoveContainer" containerID="3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52"
	Dec 12 00:36:23 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:23.577785     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:36:24 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:24.581764     718 scope.go:117] "RemoveContainer" containerID="3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52"
	Dec 12 00:36:24 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:24.581946     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:36:25 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:25.583922     718 scope.go:117] "RemoveContainer" containerID="3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52"
	Dec 12 00:36:25 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:25.584080     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:36:26 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:26.563047     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 00:36:29 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:29.127601     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-262gm" podStartSLOduration=4.010489259 podStartE2EDuration="10.127576957s" podCreationTimestamp="2025-12-12 00:36:19 +0000 UTC" firstStartedPulling="2025-12-12 00:36:19.949693338 +0000 UTC m=+7.561570286" lastFinishedPulling="2025-12-12 00:36:26.066781036 +0000 UTC m=+13.678657984" observedRunningTime="2025-12-12 00:36:26.601107357 +0000 UTC m=+14.212984326" watchObservedRunningTime="2025-12-12 00:36:29.127576957 +0000 UTC m=+16.739453925"
	Dec 12 00:36:39 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:39.501770     718 scope.go:117] "RemoveContainer" containerID="3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52"
	Dec 12 00:36:39 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:39.619923     718 scope.go:117] "RemoveContainer" containerID="3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52"
	Dec 12 00:36:39 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:39.620143     718 scope.go:117] "RemoveContainer" containerID="a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3"
	Dec 12 00:36:39 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:39.620367     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:36:44 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:44.413538     718 scope.go:117] "RemoveContainer" containerID="a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3"
	Dec 12 00:36:44 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:44.413808     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:36:47 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:47.643314     718 scope.go:117] "RemoveContainer" containerID="6ef4dec1d3a751038100ee6d4e57afdad852b8404aec051e32e0037be43000bd"
	Dec 12 00:36:55 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:55.501756     718 scope.go:117] "RemoveContainer" containerID="a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3"
	Dec 12 00:36:55 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:55.501930     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:37:06 default-k8s-diff-port-079970 kubelet[718]: I1212 00:37:06.501342     718 scope.go:117] "RemoveContainer" containerID="a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3"
	Dec 12 00:37:06 default-k8s-diff-port-079970 kubelet[718]: I1212 00:37:06.696296     718 scope.go:117] "RemoveContainer" containerID="a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3"
	Dec 12 00:37:06 default-k8s-diff-port-079970 kubelet[718]: I1212 00:37:06.696545     718 scope.go:117] "RemoveContainer" containerID="28856be11c5cc482e16607fa0597a6aa67543ac2cdc0834d1bd6d2909c225289"
	Dec 12 00:37:06 default-k8s-diff-port-079970 kubelet[718]: E1212 00:37:06.696796     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:37:11 default-k8s-diff-port-079970 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:37:11 default-k8s-diff-port-079970 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:37:11 default-k8s-diff-port-079970 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:37:11 default-k8s-diff-port-079970 systemd[1]: kubelet.service: Consumed 1.793s CPU time.
	
	
	==> kubernetes-dashboard [7ac4fbd2f6c8431af2b7ef1b0c5d94360ce07676d7c162a642abe8818be8add8] <==
	2025/12/12 00:36:26 Starting overwatch
	2025/12/12 00:36:26 Using namespace: kubernetes-dashboard
	2025/12/12 00:36:26 Using in-cluster config to connect to apiserver
	2025/12/12 00:36:26 Using secret token for csrf signing
	2025/12/12 00:36:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 00:36:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 00:36:26 Successful initial request to the apiserver, version: v1.34.2
	2025/12/12 00:36:26 Generating JWE encryption key
	2025/12/12 00:36:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 00:36:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 00:36:26 Initializing JWE encryption key from synchronized object
	2025/12/12 00:36:26 Creating in-cluster Sidecar client
	2025/12/12 00:36:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 00:36:26 Serving insecurely on HTTP port: 9090
	2025/12/12 00:36:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [666120bd1d8c90f7d6256aa0ec58755862735161bce1ffcf61fe971cc6d38d5c] <==
	I1212 00:36:47.711620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:36:47.711670       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 00:36:47.714394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:36:51.169889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:36:55.430024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:36:59.030262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:02.084546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:05.110608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:05.123812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:37:05.123969       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:37:05.124222       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079970_121cc942-fa41-4eff-8ebc-61d2435615ea!
	I1212 00:37:05.125591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80539c40-e5b1-4dda-83b4-30c234eea46b", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-079970_121cc942-fa41-4eff-8ebc-61d2435615ea became leader
	W1212 00:37:05.131850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:05.142695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:37:05.224685       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079970_121cc942-fa41-4eff-8ebc-61d2435615ea!
	W1212 00:37:07.145317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:07.149818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:09.153071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:09.156827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:11.159848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:11.164984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:13.170121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:13.205292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:15.208873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:15.215890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6ef4dec1d3a751038100ee6d4e57afdad852b8404aec051e32e0037be43000bd] <==
	I1212 00:36:16.872827       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 00:36:46.874994       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970: exit status 2 (374.703249ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-079970 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-079970
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-079970:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84",
	        "Created": "2025-12-12T00:35:00.648347206Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:36:06.06895791Z",
	            "FinishedAt": "2025-12-12T00:36:05.182172786Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/hostname",
	        "HostsPath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/hosts",
	        "LogPath": "/var/lib/docker/containers/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84/d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84-json.log",
	        "Name": "/default-k8s-diff-port-079970",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-079970:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-079970",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d079df7029d4480b9b6f9a8bb19471882c9834331523cbf51ef02702fb6ffd84",
	                "LowerDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028-init/diff:/var/lib/docker/overlay2/c357d121f629e97e2f2fc3f8446a9ac24d3006c6e15dc5a7387f95af9819d8c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f4c9209913a7e2c4752af1f8012a218062093f65de9760f0c38133c56c619028/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-079970",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-079970/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-079970",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-079970",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-079970",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e0513f10991f90a23fe6ee841b1d3e10c76c7458f7a608c6a0239d2b7c6c657f",
	            "SandboxKey": "/var/run/docker/netns/e0513f10991f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-079970": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e9d719bd40fd60bd78307adede76477356bbd0153233e68c6cf65e5ad664376",
	                    "EndpointID": "05e4beef18006825e4121487513f41e6757530d4de1265afcfb1ffc4e0baa05b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "da:b1:b0:cf:84:86",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-079970",
	                        "d079df7029d4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970: exit status 2 (369.128448ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-079970 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-079970 logs -n 25: (1.210077659s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-129742 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                          │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /etc/kubernetes/kubelet.conf                                                                                                         │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /var/lib/kubelet/config.yaml                                                                                                         │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo systemctl status docker --all --full --no-pager                                                                                          │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo systemctl cat docker --no-pager                                                                                                          │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /etc/docker/daemon.json                                                                                                              │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo docker system info                                                                                                                       │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo systemctl status cri-docker --all --full --no-pager                                                                                      │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo systemctl cat cri-docker --no-pager                                                                                                      │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                 │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                           │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cri-dockerd --version                                                                                                                    │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo systemctl status containerd --all --full --no-pager                                                                                      │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ ssh     │ -p kindnet-129742 sudo systemctl cat containerd --no-pager                                                                                                      │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /lib/systemd/system/containerd.service                                                                                               │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo cat /etc/containerd/config.toml                                                                                                          │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo containerd config dump                                                                                                                   │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo systemctl status crio --all --full --no-pager                                                                                            │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo systemctl cat crio --no-pager                                                                                                            │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                  │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ ssh     │ -p kindnet-129742 sudo crio config                                                                                                                              │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ image   │ default-k8s-diff-port-079970 image list --format=json                                                                                                           │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ delete  │ -p kindnet-129742                                                                                                                                               │ kindnet-129742               │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │ 12 Dec 25 00:37 UTC │
	│ pause   │ -p default-k8s-diff-port-079970 --alsologtostderr -v=1                                                                                                          │ default-k8s-diff-port-079970 │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	│ start   │ -p enable-default-cni-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio │ enable-default-cni-129742    │ jenkins │ v1.37.0 │ 12 Dec 25 00:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:37:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:37:14.206452  345789 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:37:14.206740  345789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:37:14.206754  345789 out.go:374] Setting ErrFile to fd 2...
	I1212 00:37:14.206760  345789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:37:14.207038  345789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:37:14.207696  345789 out.go:368] Setting JSON to false
	I1212 00:37:14.209045  345789 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4780,"bootTime":1765495054,"procs":430,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:37:14.209098  345789 start.go:143] virtualization: kvm guest
	I1212 00:37:14.211006  345789 out.go:179] * [enable-default-cni-129742] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:37:14.212436  345789 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:37:14.212524  345789 notify.go:221] Checking for updates...
	I1212 00:37:14.214518  345789 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:37:14.215645  345789 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:37:14.216681  345789 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:37:14.217749  345789 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:37:14.221911  345789 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:37:14.223662  345789 config.go:182] Loaded profile config "calico-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:37:14.223800  345789 config.go:182] Loaded profile config "custom-flannel-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:37:14.223921  345789 config.go:182] Loaded profile config "default-k8s-diff-port-079970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:37:14.224045  345789 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:37:14.250888  345789 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:37:14.250977  345789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:37:14.310878  345789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 00:37:14.299354158 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:37:14.310999  345789 docker.go:319] overlay module found
	I1212 00:37:14.312509  345789 out.go:179] * Using the docker driver based on user configuration
	I1212 00:37:14.313906  345789 start.go:309] selected driver: docker
	I1212 00:37:14.313924  345789 start.go:927] validating driver "docker" against <nil>
	I1212 00:37:14.313939  345789 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:37:14.314450  345789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:37:14.374820  345789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 00:37:14.364661384 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:37:14.375000  345789 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1212 00:37:14.375274  345789 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1212 00:37:14.375332  345789 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:37:14.376872  345789 out.go:179] * Using Docker driver with root privileges
	I1212 00:37:14.377970  345789 cni.go:84] Creating CNI manager for "bridge"
	I1212 00:37:14.377989  345789 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 00:37:14.378075  345789 start.go:353] cluster config:
	{Name:enable-default-cni-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-129742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:37:14.379298  345789 out.go:179] * Starting "enable-default-cni-129742" primary control-plane node in "enable-default-cni-129742" cluster
	I1212 00:37:14.380548  345789 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:37:14.381657  345789 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:37:14.382702  345789 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:37:14.382729  345789 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:37:14.382736  345789 cache.go:65] Caching tarball of preloaded images
	I1212 00:37:14.382800  345789 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:37:14.382839  345789 preload.go:238] Found /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:37:14.382850  345789 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:37:14.382925  345789 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/enable-default-cni-129742/config.json ...
	I1212 00:37:14.382943  345789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/enable-default-cni-129742/config.json: {Name:mk4ca03298f3b3b7fbb97aeb8319c6e566cfe38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:14.403422  345789 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:37:14.403439  345789 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:37:14.403455  345789 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:37:14.403500  345789 start.go:360] acquireMachinesLock for enable-default-cni-129742: {Name:mkfa556c62690653f13fdb715cf6d58a355613dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:37:14.403603  345789 start.go:364] duration metric: took 81.662µs to acquireMachinesLock for "enable-default-cni-129742"
	I1212 00:37:14.403634  345789 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-129742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-129742 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:37:14.403708  345789 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 12 00:36:39 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:39.552468401Z" level=info msg="Started container" PID=1744 containerID=a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper id=fed36fad-0aff-42ed-bb38-dbf399c485f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8fccf844463d7e6b74ca63826e6f3d7bd560533167c790e764153ece4ef3eaa
	Dec 12 00:36:39 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:39.621367985Z" level=info msg="Removing container: 3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52" id=798adbf1-5d77-4bbb-8601-ee997fab5f8c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:36:39 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:39.633246626Z" level=info msg="Removed container 3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper" id=798adbf1-5d77-4bbb-8601-ee997fab5f8c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.643792825Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ff0276f3-5bab-49a3-92b9-5df461875322 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.644895165Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=28ed8d79-6fa3-42ca-a68f-79cf3cbb314c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.646131472Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ce386fba-6495-489f-a8fd-65c30869153f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.646531333Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.651919766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.652103577Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3e685fe0ecdd31b9103baf81ea93e3446a035e7dd63edbfc347ea06b82fa6787/merged/etc/passwd: no such file or directory"
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.652128597Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3e685fe0ecdd31b9103baf81ea93e3446a035e7dd63edbfc347ea06b82fa6787/merged/etc/group: no such file or directory"
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.652418143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.686705118Z" level=info msg="Created container 666120bd1d8c90f7d6256aa0ec58755862735161bce1ffcf61fe971cc6d38d5c: kube-system/storage-provisioner/storage-provisioner" id=ce386fba-6495-489f-a8fd-65c30869153f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.687279421Z" level=info msg="Starting container: 666120bd1d8c90f7d6256aa0ec58755862735161bce1ffcf61fe971cc6d38d5c" id=80862c54-10f4-40a0-ac0f-6fb7624d52b6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:36:47 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:36:47.689206719Z" level=info msg="Started container" PID=1758 containerID=666120bd1d8c90f7d6256aa0ec58755862735161bce1ffcf61fe971cc6d38d5c description=kube-system/storage-provisioner/storage-provisioner id=80862c54-10f4-40a0-ac0f-6fb7624d52b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b34394bb51462fd48a96ce2324295a78423da44d8f753581b8fc33030c568e61
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.501799249Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3b506c53-96b9-4e3c-9a3e-4018755b8521 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.502775713Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=398b0828-618f-4c9b-8d07-7abe2c8fe1a0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.503954328Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper" id=06891259-e2d3-491b-bcf6-ef5c510b8a31 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.504088122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.51073995Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.511289396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.541687771Z" level=info msg="Created container 28856be11c5cc482e16607fa0597a6aa67543ac2cdc0834d1bd6d2909c225289: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper" id=06891259-e2d3-491b-bcf6-ef5c510b8a31 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.542299881Z" level=info msg="Starting container: 28856be11c5cc482e16607fa0597a6aa67543ac2cdc0834d1bd6d2909c225289" id=a36911a0-cc46-4954-8b25-ff4bec9757c9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.544517868Z" level=info msg="Started container" PID=1794 containerID=28856be11c5cc482e16607fa0597a6aa67543ac2cdc0834d1bd6d2909c225289 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper id=a36911a0-cc46-4954-8b25-ff4bec9757c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8fccf844463d7e6b74ca63826e6f3d7bd560533167c790e764153ece4ef3eaa
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.697867553Z" level=info msg="Removing container: a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3" id=89fb73c7-8c52-45bd-a214-30ee169a944b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:37:06 default-k8s-diff-port-079970 crio[566]: time="2025-12-12T00:37:06.708245144Z" level=info msg="Removed container a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz/dashboard-metrics-scraper" id=89fb73c7-8c52-45bd-a214-30ee169a944b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	28856be11c5cc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   e8fccf844463d       dashboard-metrics-scraper-6ffb444bf9-crjgz             kubernetes-dashboard
	666120bd1d8c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           29 seconds ago       Running             storage-provisioner         1                   b34394bb51462       storage-provisioner                                    kube-system
	7ac4fbd2f6c84       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   51 seconds ago       Running             kubernetes-dashboard        0                   4f9b9d4f610cb       kubernetes-dashboard-855c9754f9-262gm                  kubernetes-dashboard
	304099090ec96       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           About a minute ago   Running             coredns                     0                   07fb9ab25ca85       coredns-66bc5c9577-jdmpv                               kube-system
	69a084da2b60b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           About a minute ago   Running             busybox                     1                   846f024726f02       busybox                                                default
	6ef4dec1d3a75       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Exited              storage-provisioner         0                   b34394bb51462       storage-provisioner                                    kube-system
	a9a387380ed7b       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           About a minute ago   Running             kube-proxy                  0                   dda4291f49f73       kube-proxy-dp8fl                                       kube-system
	e419328ac2418       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           About a minute ago   Running             kindnet-cni                 0                   c525e9e9c652b       kindnet-g8hsv                                          kube-system
	d3e91eedf153b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           About a minute ago   Running             kube-controller-manager     0                   5d00a63b60416       kube-controller-manager-default-k8s-diff-port-079970   kube-system
	089482e7f0285       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           About a minute ago   Running             kube-apiserver              0                   4e7974467f528       kube-apiserver-default-k8s-diff-port-079970            kube-system
	0e056232ea66f       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           About a minute ago   Running             kube-scheduler              0                   1143a339bcf9e       kube-scheduler-default-k8s-diff-port-079970            kube-system
	7f1db19be25d1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   48337e5b28e82       etcd-default-k8s-diff-port-079970                      kube-system
	
	
	==> coredns [304099090ec96635a8679bb2021aa91454e4d7f75e1409b6f260e1b2ac58b4be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58381 - 41838 "HINFO IN 7295719463501735635.1848477618576580803. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.476393395s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-079970
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-079970
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=default-k8s-diff-port-079970
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_35_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:35:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-079970
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:37:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:36:56 +0000   Fri, 12 Dec 2025 00:35:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:36:56 +0000   Fri, 12 Dec 2025 00:35:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:36:56 +0000   Fri, 12 Dec 2025 00:35:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:36:56 +0000   Fri, 12 Dec 2025 00:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-079970
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                2f6ae672-3816-4aaa-aade-b1dfd5ff98c4
	  Boot ID:                    57d8d546-0d15-41aa-b870-996b527c8f7a
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-jdmpv                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-default-k8s-diff-port-079970                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-g8hsv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-default-k8s-diff-port-079970             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-079970    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-dp8fl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-default-k8s-diff-port-079970             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-crjgz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-262gm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 113s                 kube-proxy       
	  Normal  Starting                 60s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           115s                 node-controller  Node default-k8s-diff-port-079970 event: Registered Node default-k8s-diff-port-079970 in Controller
	  Normal  NodeReady                104s                 kubelet          Node default-k8s-diff-port-079970 status is now: NodeReady
	  Normal  Starting                 65s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)    kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)    kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)    kubelet          Node default-k8s-diff-port-079970 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s                  node-controller  Node default-k8s-diff-port-079970 event: Registered Node default-k8s-diff-port-079970 in Controller
	
	
	==> dmesg <==
	[  +0.081440] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024195] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.827069] kauditd_printk_skb: 47 callbacks suppressed
	[Dec11 23:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.004902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023877] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +2.047806] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +4.031547] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[  +8.191155] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[ +16.382326] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	[Dec11 23:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 5a f0 bf 17 3a 96 f5 65 29 7e 4d 08 00
	
	
	==> etcd [7f1db19be25d152013d9ea3bf18c932eb1713990f4eee394405c0469cee33812] <==
	{"level":"warn","ts":"2025-12-12T00:36:14.932209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.948440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.958145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.966705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.978723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.986396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:14.993703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.000971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.008692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.016303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.023173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.030234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.037023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.044738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.051694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.059752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.068387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.075779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.084442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.091814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.108011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.125202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.134572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.145062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:36:15.199685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58358","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:37:17 up  1:19,  0 user,  load average: 5.14, 3.91, 2.41
	Linux default-k8s-diff-port-079970 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e419328ac2418f753aacb6859126cefff6ec94cc6fbef558bfa47c49a2581809] <==
	I1212 00:36:17.049530       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1212 00:36:17.049654       1 main.go:148] setting mtu 1500 for CNI 
	I1212 00:36:17.049669       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 00:36:17.049692       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T00:36:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 00:36:17.250331       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 00:36:17.250547       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 00:36:17.250565       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 00:36:17.267339       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 00:36:17.546269       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:36:17.546298       1 metrics.go:72] Registering metrics
	I1212 00:36:17.546589       1 controller.go:711] "Syncing nftables rules"
	I1212 00:36:27.250673       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:36:27.250747       1 main.go:301] handling current node
	I1212 00:36:37.256645       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:36:37.256682       1 main.go:301] handling current node
	I1212 00:36:47.250548       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:36:47.250588       1 main.go:301] handling current node
	I1212 00:36:57.250507       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:36:57.250544       1 main.go:301] handling current node
	I1212 00:37:07.250668       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:37:07.250707       1 main.go:301] handling current node
	I1212 00:37:17.250684       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 00:37:17.250731       1 main.go:301] handling current node
	
	
	==> kube-apiserver [089482e7f0285560f4ccccc8335ad1f7791ef2b6f9d70f836c5ec7be2488701b] <==
	I1212 00:36:15.770602       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 00:36:15.770582       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 00:36:15.771076       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 00:36:15.771213       1 aggregator.go:171] initial CRD sync complete...
	I1212 00:36:15.771230       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 00:36:15.771238       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:36:15.771246       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:36:15.771215       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 00:36:15.774958       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 00:36:15.790779       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 00:36:15.799908       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:36:15.811863       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 00:36:16.026516       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:36:16.054417       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:36:16.070145       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:36:16.076613       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:36:16.081910       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:36:16.110379       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.169.190"}
	I1212 00:36:16.119516       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.126.99"}
	I1212 00:36:16.667206       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:36:19.203049       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:36:19.503716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:36:19.503716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:36:19.552087       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:36:19.552087       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d3e91eedf153b18ff649b4cbb1342ffb304ec03a16f4b25db886370aa599f5bc] <==
	I1212 00:36:19.083265       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 00:36:19.086499       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 00:36:19.087811       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1212 00:36:19.098813       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 00:36:19.098836       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 00:36:19.098865       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 00:36:19.099866       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 00:36:19.099892       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1212 00:36:19.099924       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 00:36:19.099935       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 00:36:19.099954       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 00:36:19.100039       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 00:36:19.099930       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 00:36:19.100102       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-079970"
	I1212 00:36:19.100143       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 00:36:19.100203       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1212 00:36:19.101312       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 00:36:19.101332       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 00:36:19.105405       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:36:19.107539       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:36:19.107554       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 00:36:19.107563       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 00:36:19.109658       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 00:36:19.111012       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 00:36:19.125636       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a9a387380ed7bb228e584e659c552c5abaad6b837f63fddb18537419bddd4ad3] <==
	I1212 00:36:16.913138       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:36:16.991938       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 00:36:17.092274       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 00:36:17.092337       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1212 00:36:17.092437       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:36:17.109778       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:36:17.109823       1 server_linux.go:132] "Using iptables Proxier"
	I1212 00:36:17.114655       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:36:17.114999       1 server.go:527] "Version info" version="v1.34.2"
	I1212 00:36:17.115029       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:36:17.116122       1 config.go:309] "Starting node config controller"
	I1212 00:36:17.116142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:36:17.116151       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:36:17.116196       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:36:17.116211       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:36:17.116307       1 config.go:200] "Starting service config controller"
	I1212 00:36:17.116322       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:36:17.116352       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:36:17.116361       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:36:17.217198       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:36:17.217235       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:36:17.217235       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0e056232ea66fdd61191c4da75560d02835b72581f1d1cde64bdc7b6cae2fcf1] <==
	I1212 00:36:14.378630       1 serving.go:386] Generated self-signed cert in-memory
	W1212 00:36:15.696546       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:36:15.696755       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1212 00:36:15.696827       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:36:15.696838       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:36:15.727407       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 00:36:15.727447       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:36:15.732007       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:36:15.732058       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:36:15.733067       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 00:36:15.733545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 00:36:15.833165       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:36:23 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:23.577595     718 scope.go:117] "RemoveContainer" containerID="3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52"
	Dec 12 00:36:23 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:23.577785     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:36:24 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:24.581764     718 scope.go:117] "RemoveContainer" containerID="3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52"
	Dec 12 00:36:24 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:24.581946     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:36:25 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:25.583922     718 scope.go:117] "RemoveContainer" containerID="3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52"
	Dec 12 00:36:25 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:25.584080     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:36:26 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:26.563047     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 00:36:29 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:29.127601     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-262gm" podStartSLOduration=4.010489259 podStartE2EDuration="10.127576957s" podCreationTimestamp="2025-12-12 00:36:19 +0000 UTC" firstStartedPulling="2025-12-12 00:36:19.949693338 +0000 UTC m=+7.561570286" lastFinishedPulling="2025-12-12 00:36:26.066781036 +0000 UTC m=+13.678657984" observedRunningTime="2025-12-12 00:36:26.601107357 +0000 UTC m=+14.212984326" watchObservedRunningTime="2025-12-12 00:36:29.127576957 +0000 UTC m=+16.739453925"
	Dec 12 00:36:39 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:39.501770     718 scope.go:117] "RemoveContainer" containerID="3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52"
	Dec 12 00:36:39 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:39.619923     718 scope.go:117] "RemoveContainer" containerID="3415f9618d3add40da85ce2e707d93bac239fa62cd640335d28034c55f66df52"
	Dec 12 00:36:39 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:39.620143     718 scope.go:117] "RemoveContainer" containerID="a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3"
	Dec 12 00:36:39 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:39.620367     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:36:44 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:44.413538     718 scope.go:117] "RemoveContainer" containerID="a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3"
	Dec 12 00:36:44 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:44.413808     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:36:47 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:47.643314     718 scope.go:117] "RemoveContainer" containerID="6ef4dec1d3a751038100ee6d4e57afdad852b8404aec051e32e0037be43000bd"
	Dec 12 00:36:55 default-k8s-diff-port-079970 kubelet[718]: I1212 00:36:55.501756     718 scope.go:117] "RemoveContainer" containerID="a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3"
	Dec 12 00:36:55 default-k8s-diff-port-079970 kubelet[718]: E1212 00:36:55.501930     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:37:06 default-k8s-diff-port-079970 kubelet[718]: I1212 00:37:06.501342     718 scope.go:117] "RemoveContainer" containerID="a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3"
	Dec 12 00:37:06 default-k8s-diff-port-079970 kubelet[718]: I1212 00:37:06.696296     718 scope.go:117] "RemoveContainer" containerID="a3120bd02e313b946c0bf10f63c761c88d29e95451fecbc16013be1d819f3ee3"
	Dec 12 00:37:06 default-k8s-diff-port-079970 kubelet[718]: I1212 00:37:06.696545     718 scope.go:117] "RemoveContainer" containerID="28856be11c5cc482e16607fa0597a6aa67543ac2cdc0834d1bd6d2909c225289"
	Dec 12 00:37:06 default-k8s-diff-port-079970 kubelet[718]: E1212 00:37:06.696796     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-crjgz_kubernetes-dashboard(28151ca5-a50c-44c1-84f5-59b8130789a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-crjgz" podUID="28151ca5-a50c-44c1-84f5-59b8130789a1"
	Dec 12 00:37:11 default-k8s-diff-port-079970 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 00:37:11 default-k8s-diff-port-079970 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 00:37:11 default-k8s-diff-port-079970 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:37:11 default-k8s-diff-port-079970 systemd[1]: kubelet.service: Consumed 1.793s CPU time.
	
	
	==> kubernetes-dashboard [7ac4fbd2f6c8431af2b7ef1b0c5d94360ce07676d7c162a642abe8818be8add8] <==
	2025/12/12 00:36:26 Using namespace: kubernetes-dashboard
	2025/12/12 00:36:26 Using in-cluster config to connect to apiserver
	2025/12/12 00:36:26 Using secret token for csrf signing
	2025/12/12 00:36:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 00:36:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 00:36:26 Successful initial request to the apiserver, version: v1.34.2
	2025/12/12 00:36:26 Generating JWE encryption key
	2025/12/12 00:36:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 00:36:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 00:36:26 Initializing JWE encryption key from synchronized object
	2025/12/12 00:36:26 Creating in-cluster Sidecar client
	2025/12/12 00:36:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 00:36:26 Serving insecurely on HTTP port: 9090
	2025/12/12 00:36:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 00:36:26 Starting overwatch
	
	
	==> storage-provisioner [666120bd1d8c90f7d6256aa0ec58755862735161bce1ffcf61fe971cc6d38d5c] <==
	W1212 00:36:47.714394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:36:51.169889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:36:55.430024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:36:59.030262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:02.084546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:05.110608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:05.123812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:37:05.123969       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:37:05.124222       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079970_121cc942-fa41-4eff-8ebc-61d2435615ea!
	I1212 00:37:05.125591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80539c40-e5b1-4dda-83b4-30c234eea46b", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-079970_121cc942-fa41-4eff-8ebc-61d2435615ea became leader
	W1212 00:37:05.131850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:05.142695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 00:37:05.224685       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079970_121cc942-fa41-4eff-8ebc-61d2435615ea!
	W1212 00:37:07.145317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:07.149818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:09.153071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:09.156827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:11.159848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:11.164984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:13.170121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:13.205292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:15.208873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:15.215890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:17.219696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:37:17.225256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6ef4dec1d3a751038100ee6d4e57afdad852b8404aec051e32e0037be43000bd] <==
	I1212 00:36:16.872827       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 00:36:46.874994       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970: exit status 2 (422.266262ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-079970 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.84s)

                                                
                                    

Test pass (354/415)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.02
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 3.11
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.19
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.38
30 TestBinaryMirror 0.79
31 TestOffline 62.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 123.44
40 TestAddons/serial/GCPAuth/Namespaces 0.11
41 TestAddons/serial/GCPAuth/FakeCredentials 8.4
57 TestAddons/StoppedEnableDisable 16.62
58 TestCertOptions 29.88
59 TestCertExpiration 210.75
61 TestForceSystemdFlag 26.74
62 TestForceSystemdEnv 21.61
67 TestErrorSpam/setup 21.22
68 TestErrorSpam/start 0.63
69 TestErrorSpam/status 0.9
70 TestErrorSpam/pause 6.54
71 TestErrorSpam/unpause 5.65
72 TestErrorSpam/stop 8.06
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 36.03
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 13.95
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.74
84 TestFunctional/serial/CacheCmd/cache/add_local 1.22
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.46
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.11
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 53.2
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.12
95 TestFunctional/serial/LogsFileCmd 1.14
96 TestFunctional/serial/InvalidService 3.72
98 TestFunctional/parallel/ConfigCmd 0.43
99 TestFunctional/parallel/DashboardCmd 6.47
100 TestFunctional/parallel/DryRun 0.38
101 TestFunctional/parallel/InternationalLanguage 0.16
102 TestFunctional/parallel/StatusCmd 0.92
106 TestFunctional/parallel/ServiceCmdConnect 10.64
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 24.72
110 TestFunctional/parallel/SSHCmd 0.82
111 TestFunctional/parallel/CpCmd 1.87
112 TestFunctional/parallel/MySQL 22.2
113 TestFunctional/parallel/FileSync 0.3
114 TestFunctional/parallel/CertSync 1.75
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
122 TestFunctional/parallel/License 0.44
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.66
128 TestFunctional/parallel/ImageCommands/Setup 1.01
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.28
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.71
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
150 TestFunctional/parallel/ServiceCmd/DeployApp 7.13
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
152 TestFunctional/parallel/ProfileCmd/profile_list 0.39
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
154 TestFunctional/parallel/MountCmd/any-port 6.06
155 TestFunctional/parallel/ServiceCmd/List 1.72
156 TestFunctional/parallel/Version/short 0.07
157 TestFunctional/parallel/Version/components 0.53
158 TestFunctional/parallel/ServiceCmd/JSONOutput 1.81
159 TestFunctional/parallel/MountCmd/specific-port 2.18
160 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
161 TestFunctional/parallel/ServiceCmd/Format 0.67
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.71
163 TestFunctional/parallel/ServiceCmd/URL 0.55
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.01
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 38.43
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 11.32
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.52
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.19
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.28
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.49
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 44.83
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.13
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.14
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.02
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.45
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 5.27
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.18
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.94
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 7.69
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 21.78
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.54
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.77
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 23.35
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.65
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.06
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.63
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.42
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.47
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 7.18
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.41
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.54
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.46
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.26
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.25
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 1.32
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.27
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.38
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.11
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 10.2
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.81
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.59
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.32
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.48
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.59
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.4
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.36
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 5.74
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.31
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.33
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.34
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.36
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.98
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.16
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.17
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.16
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.74
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.01
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.01
265 TestMultiControlPlane/serial/StartCluster 133.51
266 TestMultiControlPlane/serial/DeployApp 4.24
267 TestMultiControlPlane/serial/PingHostFromPods 1
268 TestMultiControlPlane/serial/AddWorkerNode 23.32
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
271 TestMultiControlPlane/serial/CopyFile 16.41
272 TestMultiControlPlane/serial/StopSecondaryNode 14.2
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.36
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 108.83
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.44
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
279 TestMultiControlPlane/serial/StopCluster 42.95
280 TestMultiControlPlane/serial/RestartCluster 53.59
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
282 TestMultiControlPlane/serial/AddSecondaryNode 68.95
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
288 TestJSONOutput/start/Command 34.63
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.26
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.22
313 TestKicCustomNetwork/create_custom_network 28.78
314 TestKicCustomNetwork/use_default_bridge_network 21.55
315 TestKicExistingNetwork 25.9
316 TestKicCustomSubnet 22.34
317 TestKicStaticIP 26.55
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 43.96
322 TestMountStart/serial/StartWithMountFirst 7.6
323 TestMountStart/serial/VerifyMountFirst 0.26
324 TestMountStart/serial/StartWithMountSecond 7.67
325 TestMountStart/serial/VerifyMountSecond 0.26
326 TestMountStart/serial/DeleteFirst 1.65
327 TestMountStart/serial/VerifyMountPostDelete 0.25
328 TestMountStart/serial/Stop 1.25
329 TestMountStart/serial/RestartStopped 7.52
330 TestMountStart/serial/VerifyMountPostStop 0.26
333 TestMultiNode/serial/FreshStart2Nodes 89.13
334 TestMultiNode/serial/DeployApp2Nodes 2.43
335 TestMultiNode/serial/PingHostFrom2Pods 0.7
336 TestMultiNode/serial/AddNode 22.84
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.62
339 TestMultiNode/serial/CopyFile 9.43
340 TestMultiNode/serial/StopNode 2.21
341 TestMultiNode/serial/StartAfterStop 6.95
342 TestMultiNode/serial/RestartKeepsNodes 57.24
343 TestMultiNode/serial/DeleteNode 4.93
344 TestMultiNode/serial/StopMultiNode 17.53
345 TestMultiNode/serial/RestartMultiNode 44.93
346 TestMultiNode/serial/ValidateNameConflict 22.05
351 TestPreload 110.43
353 TestScheduledStopUnix 99.16
356 TestInsufficientStorage 11.48
357 TestRunningBinaryUpgrade 316.81
359 TestKubernetesUpgrade 302.66
360 TestMissingContainerUpgrade 70.04
361 TestStoppedBinaryUpgrade/Setup 0.77
363 TestPause/serial/Start 85.85
364 TestStoppedBinaryUpgrade/Upgrade 306
373 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
374 TestNoKubernetes/serial/StartWithK8s 22.82
375 TestPause/serial/SecondStartNoReconfiguration 7.5
384 TestNetworkPlugins/group/false 3.63
385 TestNoKubernetes/serial/StartWithStopK8s 6.68
389 TestNoKubernetes/serial/Start 6.01
390 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
391 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
392 TestNoKubernetes/serial/ProfileList 31.36
393 TestNoKubernetes/serial/Stop 1.27
394 TestNoKubernetes/serial/StartNoArgs 6.34
395 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
396 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
398 TestStartStop/group/old-k8s-version/serial/FirstStart 49.57
400 TestStartStop/group/no-preload/serial/FirstStart 49.07
402 TestStartStop/group/embed-certs/serial/FirstStart 41.76
403 TestStartStop/group/old-k8s-version/serial/DeployApp 8.24
404 TestStartStop/group/no-preload/serial/DeployApp 8.23
406 TestStartStop/group/old-k8s-version/serial/Stop 15.98
408 TestStartStop/group/embed-certs/serial/DeployApp 7.23
409 TestStartStop/group/no-preload/serial/Stop 18.12
411 TestStartStop/group/embed-certs/serial/Stop 16.18
412 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
413 TestStartStop/group/old-k8s-version/serial/SecondStart 25.86
414 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
415 TestStartStop/group/no-preload/serial/SecondStart 26.4
416 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
417 TestStartStop/group/embed-certs/serial/SecondStart 46.67
418 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8.01
419 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
420 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
421 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
423 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
424 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
427 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.2
429 TestStartStop/group/newest-cni/serial/FirstStart 24.68
430 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
431 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
432 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
434 TestStartStop/group/newest-cni/serial/DeployApp 0
436 TestStartStop/group/newest-cni/serial/Stop 8
437 TestNetworkPlugins/group/auto/Start 41.12
438 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
439 TestStartStop/group/newest-cni/serial/SecondStart 10.11
440 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
442 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
443 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
444 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
446 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.14
447 TestNetworkPlugins/group/kindnet/Start 42.15
448 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
449 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.18
450 TestNetworkPlugins/group/auto/KubeletFlags 0.32
451 TestNetworkPlugins/group/auto/NetCatPod 9.24
452 TestNetworkPlugins/group/auto/DNS 0.15
453 TestNetworkPlugins/group/auto/Localhost 0.13
454 TestNetworkPlugins/group/auto/HairPin 0.12
455 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
456 TestNetworkPlugins/group/calico/Start 59.28
457 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
458 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
459 TestNetworkPlugins/group/kindnet/DNS 0.12
460 TestNetworkPlugins/group/kindnet/Localhost 0.09
461 TestNetworkPlugins/group/kindnet/HairPin 0.09
462 TestNetworkPlugins/group/custom-flannel/Start 49.1
463 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
464 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
465 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
467 TestNetworkPlugins/group/enable-default-cni/Start 62.87
468 TestNetworkPlugins/group/flannel/Start 47.95
469 TestNetworkPlugins/group/calico/ControllerPod 6
470 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
471 TestNetworkPlugins/group/calico/KubeletFlags 0.31
472 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
473 TestNetworkPlugins/group/calico/NetCatPod 9.17
474 TestNetworkPlugins/group/custom-flannel/DNS 0.1
475 TestNetworkPlugins/group/custom-flannel/Localhost 0.08
476 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
477 TestNetworkPlugins/group/calico/DNS 0.12
478 TestNetworkPlugins/group/calico/Localhost 0.09
479 TestNetworkPlugins/group/calico/HairPin 0.09
480 TestNetworkPlugins/group/flannel/ControllerPod 6.01
481 TestNetworkPlugins/group/bridge/Start 63.31
482 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
483 TestNetworkPlugins/group/flannel/NetCatPod 11.18
484 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
485 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.67
486 TestNetworkPlugins/group/flannel/DNS 0.11
487 TestNetworkPlugins/group/flannel/Localhost 0.09
488 TestNetworkPlugins/group/flannel/HairPin 0.1
489 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
490 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
491 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
492 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
493 TestNetworkPlugins/group/bridge/NetCatPod 9.16
494 TestNetworkPlugins/group/bridge/DNS 0.11
495 TestNetworkPlugins/group/bridge/Localhost 0.08
496 TestNetworkPlugins/group/bridge/HairPin 0.08
TestDownloadOnly/v1.28.0/json-events (4.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-628337 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-628337 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.014695115s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.02s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1211 23:55:27.915760   14503 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1211 23:55:27.915827   14503 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
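The preload-exists subtest passes when the cached tarball written by the preceding download step is present on disk; the same check can be reproduced by hand with a single ls against the path reported in the log (a sketch, not part of the test):

	ls -lh /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4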

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-628337
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-628337: exit status 85 (69.270884ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-628337 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-628337 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:23.952586   14516 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:23.952772   14516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:23.952780   14516 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:23.952784   14516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:23.953387   14516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	W1211 23:55:23.953517   14516 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22101-10975/.minikube/config/config.json: open /home/jenkins/minikube-integration/22101-10975/.minikube/config/config.json: no such file or directory
	I1211 23:55:23.953948   14516 out.go:368] Setting JSON to true
	I1211 23:55:23.954803   14516 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2270,"bootTime":1765495054,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:23.954854   14516 start.go:143] virtualization: kvm guest
	I1211 23:55:23.959336   14516 out.go:99] [download-only-628337] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:23.959440   14516 notify.go:221] Checking for updates...
	W1211 23:55:23.959453   14516 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball: no such file or directory
	I1211 23:55:23.960516   14516 out.go:171] MINIKUBE_LOCATION=22101
	I1211 23:55:23.961536   14516 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:23.962664   14516 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1211 23:55:23.963599   14516 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1211 23:55:23.964560   14516 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1211 23:55:23.966330   14516 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 23:55:23.966537   14516 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:23.988832   14516 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1211 23:55:23.988903   14516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:24.211556   14516 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-11 23:55:24.201072147 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1211 23:55:24.211648   14516 docker.go:319] overlay module found
	I1211 23:55:24.213026   14516 out.go:99] Using the docker driver based on user configuration
	I1211 23:55:24.213051   14516 start.go:309] selected driver: docker
	I1211 23:55:24.213057   14516 start.go:927] validating driver "docker" against <nil>
	I1211 23:55:24.213137   14516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:24.266951   14516 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-11 23:55:24.257001084 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1211 23:55:24.267100   14516 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:24.268079   14516 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1211 23:55:24.268223   14516 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 23:55:24.269780   14516 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-628337 host does not exist
	  To start a cluster, run: "minikube start -p download-only-628337"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-628337
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (3.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-422944 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-422944 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.107960076s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.11s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1211 23:55:31.443897   14503 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1211 23:55:31.443944   14503 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-422944
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-422944: exit status 85 (69.80629ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-628337 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-628337 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-628337                                                                                                                                                   │ download-only-628337 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ -o=json --download-only -p download-only-422944 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-422944 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:28.385388   14874 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:28.385656   14874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:28.385667   14874 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:28.385671   14874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:28.385884   14874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:55:28.386308   14874 out.go:368] Setting JSON to true
	I1211 23:55:28.387101   14874 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2274,"bootTime":1765495054,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:28.387149   14874 start.go:143] virtualization: kvm guest
	I1211 23:55:28.388852   14874 out.go:99] [download-only-422944] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:28.389112   14874 notify.go:221] Checking for updates...
	I1211 23:55:28.390744   14874 out.go:171] MINIKUBE_LOCATION=22101
	I1211 23:55:28.392104   14874 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:28.393211   14874 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1211 23:55:28.397975   14874 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1211 23:55:28.399130   14874 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1211 23:55:28.401197   14874 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 23:55:28.401410   14874 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:28.423373   14874 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1211 23:55:28.423423   14874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:28.475891   14874 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-11 23:55:28.467019914 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1211 23:55:28.475994   14874 docker.go:319] overlay module found
	I1211 23:55:28.477383   14874 out.go:99] Using the docker driver based on user configuration
	I1211 23:55:28.477404   14874 start.go:309] selected driver: docker
	I1211 23:55:28.477418   14874 start.go:927] validating driver "docker" against <nil>
	I1211 23:55:28.477518   14874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:28.528359   14874 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-11 23:55:28.519527907 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1211 23:55:28.528523   14874 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:28.528975   14874 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1211 23:55:28.529123   14874 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 23:55:28.530633   14874 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-422944 host does not exist
	  To start a cluster, run: "minikube start -p download-only-422944"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-422944
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (3.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-712196 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-712196 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.186692592s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.19s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1211 23:55:35.068781   14503 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1211 23:55:35.068821   14503 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-712196
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-712196: exit status 85 (69.143783ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-628337 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-628337 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-628337                                                                                                                                                          │ download-only-628337 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ -o=json --download-only -p download-only-422944 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-422944 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-422944                                                                                                                                                          │ download-only-422944 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ -o=json --download-only -p download-only-712196 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-712196 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:31.933231   15216 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:31.933507   15216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:31.933518   15216 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:31.933524   15216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:31.933717   15216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1211 23:55:31.934172   15216 out.go:368] Setting JSON to true
	I1211 23:55:31.934962   15216 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2278,"bootTime":1765495054,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:31.935016   15216 start.go:143] virtualization: kvm guest
	I1211 23:55:31.936728   15216 out.go:99] [download-only-712196] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:31.936880   15216 notify.go:221] Checking for updates...
	I1211 23:55:31.937895   15216 out.go:171] MINIKUBE_LOCATION=22101
	I1211 23:55:31.939026   15216 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:31.940116   15216 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1211 23:55:31.941252   15216 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1211 23:55:31.942384   15216 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1211 23:55:31.944372   15216 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 23:55:31.944624   15216 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:31.966985   15216 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1211 23:55:31.967097   15216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:32.018353   15216 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-11 23:55:32.009901834 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1211 23:55:32.018450   15216 docker.go:319] overlay module found
	I1211 23:55:32.019891   15216 out.go:99] Using the docker driver based on user configuration
	I1211 23:55:32.019910   15216 start.go:309] selected driver: docker
	I1211 23:55:32.019915   15216 start.go:927] validating driver "docker" against <nil>
	I1211 23:55:32.019986   15216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:32.072616   15216 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-11 23:55:32.063624986 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1211 23:55:32.072853   15216 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:32.073545   15216 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1211 23:55:32.073731   15216 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 23:55:32.075331   15216 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-712196 host does not exist
	  To start a cluster, run: "minikube start -p download-only-712196"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-712196
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.38s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-646254 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-646254" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-646254
--- PASS: TestDownloadOnlyKic (0.38s)

                                                
                                    
TestBinaryMirror (0.79s)
=== RUN   TestBinaryMirror
I1211 23:55:36.273859   14503 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-307462 --alsologtostderr --binary-mirror http://127.0.0.1:33495 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-307462" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-307462
--- PASS: TestBinaryMirror (0.79s)

                                                
                                    
TestOffline (62.62s)
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-101842 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-101842 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m0.247034944s)
helpers_test.go:176: Cleaning up "offline-crio-101842" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-101842
E1212 00:28:36.949849   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-101842: (2.374930496s)
--- PASS: TestOffline (62.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-758245
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-758245: exit status 85 (62.347833ms)

                                                
                                                
-- stdout --
	* Profile "addons-758245" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-758245"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-758245
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-758245: exit status 85 (66.128868ms)

                                                
                                                
-- stdout --
	* Profile "addons-758245" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-758245"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (123.44s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-758245 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-758245 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.43657126s)
--- PASS: TestAddons/Setup (123.44s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-758245 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-758245 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.4s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-758245 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-758245 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [66f33ee0-430e-4cba-bcc7-3d37526bc70d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [66f33ee0-430e-4cba-bcc7-3d37526bc70d] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.002939595s
addons_test.go:696: (dbg) Run:  kubectl --context addons-758245 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-758245 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-758245 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.40s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.62s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-758245
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-758245: (16.346802536s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-758245
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-758245
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-758245
--- PASS: TestAddons/StoppedEnableDisable (16.62s)

                                                
                                    
TestCertOptions (29.88s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-319518 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-319518 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.71328518s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-319518 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-319518 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-319518 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-319518" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-319518
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-319518: (3.404619399s)
--- PASS: TestCertOptions (29.88s)
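The SAN and apiserver-port assertions above can be spot-checked by hand with the same openssl call the test runs. A sketch only: the cleanup just above deleted cert-options-319518, so the profile would first need to be recreated with the same --apiserver-ips/--apiserver-names/--apiserver-port flags, and the grep/jsonpath filters are illustrative additions rather than part of the test:

    out/minikube-linux-amd64 -p cert-options-319518 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    kubectl --context cert-options-319518 config view --minify -o jsonpath='{.clusters[0].cluster.server}'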

                                                
                                    
TestCertExpiration (210.75s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-673665 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-673665 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (18.528055417s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-673665 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-673665 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.146613484s)
helpers_test.go:176: Cleaning up "cert-expiration-673665" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-673665
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-673665: (5.07490849s)
--- PASS: TestCertExpiration (210.75s)

                                                
                                    
TestForceSystemdFlag (26.74s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-610815 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-610815 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.049753154s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-610815 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-610815" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-610815
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-610815: (2.396057481s)
--- PASS: TestForceSystemdFlag (26.74s)
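The flag is verified by reading the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf. A minimal manual check along the same lines (a sketch: the profile is deleted in the cleanup above and would need to be recreated with --force-systemd, and the cgroup_manager key name is an assumption about the drop-in's contents, not something this log shows):

    out/minikube-linux-amd64 -p force-systemd-flag-610815 ssh "grep -i cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"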

                                                
                                    
TestForceSystemdEnv (21.61s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-551801 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-551801 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (19.242129508s)
helpers_test.go:176: Cleaning up "force-systemd-env-551801" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-551801
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-551801: (2.369400329s)
--- PASS: TestForceSystemdEnv (21.61s)

                                                
                                    
TestErrorSpam/setup (21.22s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-138768 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-138768 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-138768 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-138768 --driver=docker  --container-runtime=crio: (21.223756112s)
--- PASS: TestErrorSpam/setup (21.22s)

                                                
                                    
TestErrorSpam/start (0.63s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

                                                
                                    
TestErrorSpam/status (0.9s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 status
--- PASS: TestErrorSpam/status (0.90s)

                                                
                                    
TestErrorSpam/pause (6.54s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 pause: exit status 80 (1.89200703s)

                                                
                                                
-- stdout --
	* Pausing node nospam-138768 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 pause: exit status 80 (2.351391782s)

                                                
                                                
-- stdout --
	* Pausing node nospam-138768 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:01:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 pause: exit status 80 (2.296795886s)

                                                
                                                
-- stdout --
	* Pausing node nospam-138768 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:01:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.54s)
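Note that all three pause attempts fail identically: minikube's pre-pause listing of running containers shells out to "sudo runc list -f json", which exits 1 because /run/runc does not exist on the node. A rough manual probe (a sketch, assuming the nospam-138768 profile is still running; whether the node's CRI-O is using a runtime other than runc is an inference this log does not confirm):

    out/minikube-linux-amd64 ssh -p nospam-138768 -- "sudo runc list -f json"
    out/minikube-linux-amd64 ssh -p nospam-138768 -- "sudo ls /run/runc"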

                                                
                                    
TestErrorSpam/unpause (5.65s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 unpause: exit status 80 (2.231411106s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-138768 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:01:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 unpause: exit status 80 (1.793218092s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-138768 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:01:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 unpause: exit status 80 (1.622520535s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-138768 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:01:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.65s)

                                                
                                    
TestErrorSpam/stop (8.06s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 stop: (7.859076601s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138768 --log_dir /tmp/nospam-138768 stop
--- PASS: TestErrorSpam/stop (8.06s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/test/nested/copy/14503/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (36.03s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896350 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-896350 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (36.026620661s)
--- PASS: TestFunctional/serial/StartWithProxy (36.03s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (13.95s)
=== RUN   TestFunctional/serial/SoftStart
I1212 00:02:16.918571   14503 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896350 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-896350 --alsologtostderr -v=8: (13.952106441s)
functional_test.go:678: soft start took 13.952887397s for "functional-896350" cluster.
I1212 00:02:30.871053   14503 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (13.95s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-896350 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.74s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-896350 cache add registry.k8s.io/pause:3.3: (1.003167255s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.74s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-896350 /tmp/TestFunctionalserialCacheCmdcacheadd_local2121500014/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 cache add minikube-local-cache-test:functional-896350
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 cache delete minikube-local-cache-test:functional-896350
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-896350
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896350 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (268.362634ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)
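For reference, the round-trip this test exercises can be replayed by hand with exactly the commands shown above (a sketch, assuming the functional-896350 profile is still running):

    out/minikube-linux-amd64 -p functional-896350 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-896350 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: no such image present
    out/minikube-linux-amd64 -p functional-896350 cache reload
    out/minikube-linux-amd64 -p functional-896350 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # should succeed once the cached image is restored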

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 kubectl -- --context functional-896350 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-896350 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (53.2s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896350 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 00:02:41.374333   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:02:41.380736   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:02:41.392032   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:02:41.413345   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:02:41.454660   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:02:41.536023   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:02:41.697511   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:02:42.019149   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:02:42.661131   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:02:43.942598   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:02:46.504593   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:02:51.625939   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:01.867619   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:22.348952   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-896350 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.195087752s)
functional_test.go:776: restart took 53.195189235s for "functional-896350" cluster.
I1212 00:03:30.346843   14503 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (53.20s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-896350 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.12s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-896350 logs: (1.119732129s)
--- PASS: TestFunctional/serial/LogsCmd (1.12s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.14s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 logs --file /tmp/TestFunctionalserialLogsFileCmd1898537386/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-896350 logs --file /tmp/TestFunctionalserialLogsFileCmd1898537386/001/logs.txt: (1.139436471s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.14s)

                                                
                                    
TestFunctional/serial/InvalidService (3.72s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-896350 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-896350
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-896350: exit status 115 (325.780119ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31899 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-896350 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.72s)
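The SVC_UNREACHABLE exit above is the behaviour the test expects for a service whose backing pod never becomes ready. The sequence can be reproduced with the same manifest and commands (a sketch, assuming a running functional-896350 profile and the repository's testdata/invalidsvc.yaml):

    kubectl --context functional-896350 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-896350   # exits 115 with SVC_UNREACHABLE while no pod backs the service
    kubectl --context functional-896350 delete -f testdata/invalidsvc.yaml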

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896350 config get cpus: exit status 14 (69.078756ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896350 config get cpus: exit status 14 (72.504645ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (6.47s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-896350 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-896350 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 52536: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.47s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896350 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-896350 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (164.459682ms)

                                                
                                                
-- stdout --
	* [functional-896350] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:04:00.495003   51989 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:04:00.495104   51989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:04:00.495115   51989 out.go:374] Setting ErrFile to fd 2...
	I1212 00:04:00.495121   51989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:04:00.495360   51989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:04:00.495820   51989 out.go:368] Setting JSON to false
	I1212 00:04:00.496859   51989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2786,"bootTime":1765495054,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:04:00.496927   51989 start.go:143] virtualization: kvm guest
	I1212 00:04:00.498634   51989 out.go:179] * [functional-896350] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:04:00.499663   51989 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:04:00.499664   51989 notify.go:221] Checking for updates...
	I1212 00:04:00.502183   51989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:04:00.503295   51989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:04:00.504442   51989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:04:00.505548   51989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:04:00.506567   51989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:04:00.508135   51989 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:04:00.508778   51989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:04:00.535102   51989 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:04:00.535198   51989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:04:00.592229   51989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-12 00:04:00.583574165 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:04:00.592335   51989 docker.go:319] overlay module found
	I1212 00:04:00.593910   51989 out.go:179] * Using the docker driver based on existing profile
	I1212 00:04:00.595308   51989 start.go:309] selected driver: docker
	I1212 00:04:00.595328   51989 start.go:927] validating driver "docker" against &{Name:functional-896350 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-896350 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:04:00.595435   51989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:04:00.597163   51989 out.go:203] 
	W1212 00:04:00.598547   51989 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:04:00.599567   51989 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896350 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
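
DryRun confirms that memory validation happens before anything is created: asking for 250MB aborts with RSRC_INSUFFICIENT_REQ_MEMORY (exit code 23), and the follow-up dry run without `--memory` passes. The same two calls, copied from the log (the `echo` is only there to surface the exit code):

# Expected to fail fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY).
minikube start -p functional-896350 --dry-run --memory 250MB --alsologtostderr \
  --driver=docker --container-runtime=crio
echo "undersized dry run exited $?"

# Same dry run without the undersized memory request; expected to exit 0.
minikube start -p functional-896350 --dry-run --alsologtostderr -v=1 \
  --driver=docker --container-runtime=crio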

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896350 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-896350 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (156.969187ms)

                                                
                                                
-- stdout --
	* [functional-896350] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:04:00.338219   51870 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:04:00.338313   51870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:04:00.338326   51870 out.go:374] Setting ErrFile to fd 2...
	I1212 00:04:00.338333   51870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:04:00.338652   51870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:04:00.339032   51870 out.go:368] Setting JSON to false
	I1212 00:04:00.340046   51870 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2786,"bootTime":1765495054,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:04:00.340101   51870 start.go:143] virtualization: kvm guest
	I1212 00:04:00.342043   51870 out.go:179] * [functional-896350] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1212 00:04:00.343603   51870 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:04:00.343636   51870 notify.go:221] Checking for updates...
	I1212 00:04:00.345828   51870 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:04:00.347527   51870 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:04:00.348668   51870 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:04:00.349735   51870 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:04:00.350904   51870 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:04:00.352333   51870 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:04:00.352833   51870 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:04:00.375599   51870 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:04:00.375748   51870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:04:00.428864   51870 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-12 00:04:00.418901007 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:04:00.428954   51870 docker.go:319] overlay module found
	I1212 00:04:00.430426   51870 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 00:04:00.431506   51870 start.go:309] selected driver: docker
	I1212 00:04:00.431520   51870 start.go:927] validating driver "docker" against &{Name:functional-896350 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-896350 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:04:00.431622   51870 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:04:00.433255   51870 out.go:203] 
	W1212 00:04:00.434338   51870 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:04:00.435386   51870 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
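
InternationalLanguage repeats the undersized dry run and only checks that the output is localized (French here). The test drives this through the environment; below is a sketch assuming the locale is picked up from LC_ALL/LANG, which is an assumption on my part and not shown in the log:

# Assumes minikube localizes based on LC_ALL/LANG; adjust if your build differs.
LC_ALL=fr_FR.UTF-8 minikube start -p functional-896350 --dry-run --memory 250MB \
  --alsologtostderr --driver=docker --container-runtime=crio
# Expect the French RSRC_INSUFFICIENT_REQ_MEMORY message quoted above.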

                                                
                                    
TestFunctional/parallel/StatusCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)
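
StatusCmd covers the three output modes of `minikube status`: the default table, a Go template via `-f`, and JSON. The same calls by hand (the template labels are arbitrary display strings; the logged test spells one of them "kublet"):

minikube -p functional-896350 status
minikube -p functional-896350 status \
  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
minikube -p functional-896350 status -o json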

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-896350 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-896350 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-6g4tx" [80e7d537-1cf1-4e97-a769-56a40de0286c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-6g4tx" [80e7d537-1cf1-4e97-a769-56a40de0286c] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.002840093s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30470
functional_test.go:1680: http://192.168.49.2:30470: success! body:
Request served by hello-node-connect-7d85dfc575-6g4tx

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30470
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.64s)
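
ServiceCmdConnect is an end-to-end check: deploy the echo server, expose it as a NodePort, resolve the URL with `minikube service --url`, and fetch it over HTTP. A condensed sketch of that flow; the `kubectl wait` and `curl` steps stand in for the test's own polling and HTTP client:

kubectl --context functional-896350 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-896350 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-896350 wait --for=condition=Ready pod -l app=hello-node-connect --timeout=120s
URL=$(minikube -p functional-896350 service hello-node-connect --url)
curl -s "$URL"   # the body echoes the serving pod name, as in the log above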

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [48e5338e-3b68-4834-acbf-e04b281d0290] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002358854s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-896350 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-896350 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-896350 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-896350 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [0c5447a3-1967-41fe-8dc7-b73278de8063] Pending
helpers_test.go:353: "sp-pod" [0c5447a3-1967-41fe-8dc7-b73278de8063] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [0c5447a3-1967-41fe-8dc7-b73278de8063] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004194735s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-896350 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-896350 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-896350 delete -f testdata/storage-provisioner/pod.yaml: (1.022024187s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-896350 apply -f testdata/storage-provisioner/pod.yaml
I1212 00:03:57.570949   14503 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e3fcc527-7bf9-43ba-a916-77a0e5c14ed2] Pending
helpers_test.go:353: "sp-pod" [e3fcc527-7bf9-43ba-a916-77a0e5c14ed2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003008445s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-896350 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.72s)
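
The PVC test verifies that data written through the claim survives pod deletion: apply the claim and a pod that mounts it, touch a file, delete and recreate the pod, then list the file again. The same sequence using the repository's testdata manifests; the `kubectl wait` calls replace the test's readiness polling:

kubectl --context functional-896350 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-896350 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-896350 wait --for=condition=Ready pod sp-pod --timeout=120s
kubectl --context functional-896350 exec sp-pod -- touch /tmp/mount/foo

# Recreate the pod; the file must survive because it lives on the claim, not in the pod.
kubectl --context functional-896350 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-896350 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-896350 wait --for=condition=Ready pod sp-pod --timeout=120s
kubectl --context functional-896350 exec sp-pod -- ls /tmp/mount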

                                                
                                    
TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh -n functional-896350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 cp functional-896350:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd515105304/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh -n functional-896350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh -n functional-896350 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.87s)
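
CpCmd copies a file into the node, back out again, and into a directory that does not yet exist on the node, verifying each copy with `ssh "sudo cat"`. Equivalent commands; the host-side destination and the `diff` check are illustrative, not the test's own paths:

# Host -> node
minikube -p functional-896350 cp testdata/cp-test.txt /home/docker/cp-test.txt
minikube -p functional-896350 ssh -n functional-896350 "sudo cat /home/docker/cp-test.txt"

# Node -> host (local destination is illustrative)
minikube -p functional-896350 cp functional-896350:/home/docker/cp-test.txt /tmp/cp-test.txt
diff testdata/cp-test.txt /tmp/cp-test.txt

# Host -> node with missing destination directories created on the fly
minikube -p functional-896350 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
minikube -p functional-896350 ssh -n functional-896350 "sudo cat /tmp/does/not/exist/cp-test.txt"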

                                                
                                    
TestFunctional/parallel/MySQL (22.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-896350 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-pgg25" [c139f8c9-3386-4141-b148-db2b3bbc54d2] Pending
helpers_test.go:353: "mysql-6bcdcbc558-pgg25" [c139f8c9-3386-4141-b148-db2b3bbc54d2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-pgg25" [c139f8c9-3386-4141-b148-db2b3bbc54d2] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.003205166s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-896350 exec mysql-6bcdcbc558-pgg25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-896350 exec mysql-6bcdcbc558-pgg25 -- mysql -ppassword -e "show databases;": exit status 1 (102.622185ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:03:52.054590   14503 retry.go:31] will retry after 777.557184ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-896350 exec mysql-6bcdcbc558-pgg25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-896350 exec mysql-6bcdcbc558-pgg25 -- mysql -ppassword -e "show databases;": exit status 1 (90.625142ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:03:52.923190   14503 retry.go:31] will retry after 1.636315307s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-896350 exec mysql-6bcdcbc558-pgg25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-896350 exec mysql-6bcdcbc558-pgg25 -- mysql -ppassword -e "show databases;": exit status 1 (83.47092ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:03:54.644109   14503 retry.go:31] will retry after 1.836638375s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-896350 exec mysql-6bcdcbc558-pgg25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-896350 exec mysql-6bcdcbc558-pgg25 -- mysql -ppassword -e "show databases;": exit status 1 (95.976334ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:03:56.577047   14503 retry.go:31] will retry after 2.286686126s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-896350 exec mysql-6bcdcbc558-pgg25 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.20s)
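
The MySQL retries above are expected: the container reports authentication and then socket errors while the server initializes, and the test keeps re-running `show databases;` until it succeeds. The same behaviour as a plain retry loop; the pod name is from this run and should be looked up fresh:

POD=mysql-6bcdcbc558-pgg25   # from this run; find yours with: kubectl get pods -l app=mysql
until kubectl --context functional-896350 exec "$POD" -- mysql -ppassword -e "show databases;"; do
  echo "mysql not ready yet, retrying in 2s..."
  sleep 2
done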

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14503/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "sudo cat /etc/test/nested/copy/14503/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
TestFunctional/parallel/CertSync (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14503.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "sudo cat /etc/ssl/certs/14503.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14503.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "sudo cat /usr/share/ca-certificates/14503.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/145032.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "sudo cat /etc/ssl/certs/145032.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/145032.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "sudo cat /usr/share/ca-certificates/145032.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.75s)
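
CertSync expects the host certificate to be synced into the guest at three places: /etc/ssl/certs/&lt;name&gt;.pem, /usr/share/ca-certificates/&lt;name&gt;.pem, and the OpenSSL hash link under /etc/ssl/certs. The names here (14503, 145032, and the hash values) come from this run; substitute your own when checking by hand:

# File names below are taken from this run's log; replace them with your own cert names.
for f in /etc/ssl/certs/14503.pem \
         /usr/share/ca-certificates/14503.pem \
         /etc/ssl/certs/51391683.0; do
  minikube -p functional-896350 ssh "sudo cat $f" > /dev/null && echo "found $f"
done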

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-896350 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
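
NodeLabels only asserts that the node label keys are readable through a Go template. The same query, reusable against any profile:

kubectl --context functional-896350 get nodes --output=go-template \
  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'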

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896350 ssh "sudo systemctl is-active docker": exit status 1 (316.650028ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896350 ssh "sudo systemctl is-active containerd": exit status 1 (346.270605ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
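
With crio selected as the container runtime, docker and containerd must be inactive inside the node; `systemctl is-active` prints "inactive" and exits 3, which `minikube ssh` surfaces as a non-zero exit. A manual check (the loop is just shorthand for the two logged invocations):

for svc in docker containerd; do
  minikube -p functional-896350 ssh "sudo systemctl is-active $svc" \
    && echo "$svc is unexpectedly active" \
    || echo "$svc inactive, as expected with --container-runtime=crio"
done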

                                                
                                    
TestFunctional/parallel/License (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896350 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-896350
localhost/kicbase/echo-server:functional-896350
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896350 image ls --format short --alsologtostderr:
I1212 00:04:05.433569   53830 out.go:360] Setting OutFile to fd 1 ...
I1212 00:04:05.433871   53830 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:04:05.433884   53830 out.go:374] Setting ErrFile to fd 2...
I1212 00:04:05.433890   53830 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:04:05.434142   53830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
I1212 00:04:05.434960   53830 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:04:05.435090   53830 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:04:05.435687   53830 cli_runner.go:164] Run: docker container inspect functional-896350 --format={{.State.Status}}
I1212 00:04:05.458907   53830 ssh_runner.go:195] Run: systemctl --version
I1212 00:04:05.458962   53830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-896350
I1212 00:04:05.481053   53830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/functional-896350/id_rsa Username:docker}
I1212 00:04:05.583747   53830 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
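
The ImageList* tests here and below differ only in the `--format` flag; the same listing can be pulled in all four shapes in one go (the loop is added for brevity and is not part of the tests):

for fmt in short table json yaml; do
  minikube -p functional-896350 image ls --format "$fmt"
done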

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896350 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-896350  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ localhost/minikube-local-cache-test     │ functional-896350  │ ef0ece07f1c3a │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896350 image ls --format table --alsologtostderr:
I1212 00:04:07.713057   55258 out.go:360] Setting OutFile to fd 1 ...
I1212 00:04:07.713355   55258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:04:07.713366   55258 out.go:374] Setting ErrFile to fd 2...
I1212 00:04:07.713371   55258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:04:07.713647   55258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
I1212 00:04:07.714374   55258 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:04:07.714534   55258 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:04:07.715188   55258 cli_runner.go:164] Run: docker container inspect functional-896350 --format={{.State.Status}}
I1212 00:04:07.738880   55258 ssh_runner.go:195] Run: systemctl --version
I1212 00:04:07.738935   55258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-896350
I1212 00:04:07.760314   55258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/functional-896350/id_rsa Username:docker}
I1212 00:04:07.860929   55258 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896350 image ls --format json --alsologtostderr:
[{"id":"ef0ece07f1c3a21b2646c5be74433ce93c486662bcb9e4bcc05338971395cdff","repoDigests":["localhost/minikube-local-cache-test@sha256:88498cb2a94d39707097933ed9634dd42a737e4f439130f46165f26ac77900af"],"repoTags":["localhost/minikube-local-cache-test:functional-896350"],"size":"3330"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f
5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-896350"],"size":"494
3877"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4d
ddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f
588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/librar
y/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"
},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896350 image ls --format json --alsologtostderr:
I1212 00:04:07.422376   54825 out.go:360] Setting OutFile to fd 1 ...
I1212 00:04:07.422623   54825 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:04:07.422631   54825 out.go:374] Setting ErrFile to fd 2...
I1212 00:04:07.422638   54825 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:04:07.422992   54825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
I1212 00:04:07.424801   54825 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:04:07.425169   54825 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:04:07.432491   54825 cli_runner.go:164] Run: docker container inspect functional-896350 --format={{.State.Status}}
I1212 00:04:07.462448   54825 ssh_runner.go:195] Run: systemctl --version
I1212 00:04:07.462795   54825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-896350
I1212 00:04:07.489998   54825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/functional-896350/id_rsa Username:docker}
I1212 00:04:07.595069   54825 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896350 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-896350
size: "4943877"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: ef0ece07f1c3a21b2646c5be74433ce93c486662bcb9e4bcc05338971395cdff
repoDigests:
- localhost/minikube-local-cache-test@sha256:88498cb2a94d39707097933ed9634dd42a737e4f439130f46165f26ac77900af
repoTags:
- localhost/minikube-local-cache-test:functional-896350
size: "3330"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896350 image ls --format yaml --alsologtostderr:
I1212 00:04:05.671465   53910 out.go:360] Setting OutFile to fd 1 ...
I1212 00:04:05.671717   53910 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:04:05.671726   53910 out.go:374] Setting ErrFile to fd 2...
I1212 00:04:05.671730   53910 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:04:05.671945   53910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
I1212 00:04:05.672456   53910 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:04:05.672566   53910 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:04:05.672965   53910 cli_runner.go:164] Run: docker container inspect functional-896350 --format={{.State.Status}}
I1212 00:04:05.691686   53910 ssh_runner.go:195] Run: systemctl --version
I1212 00:04:05.691743   53910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-896350
I1212 00:04:05.711055   53910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/functional-896350/id_rsa Username:docker}
I1212 00:04:05.806824   53910 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896350 ssh pgrep buildkitd: exit status 1 (268.116502ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image build -t localhost/my-image:functional-896350 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-896350 image build -t localhost/my-image:functional-896350 testdata/build --alsologtostderr: (2.146764748s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896350 image build -t localhost/my-image:functional-896350 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1b20535a642
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-896350
--> 29767e3fa34
Successfully tagged localhost/my-image:functional-896350
29767e3fa343db0fc41690044b4186513a461ffb9c72fa977cfa44bf78a21679
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896350 image build -t localhost/my-image:functional-896350 testdata/build --alsologtostderr:
I1212 00:04:06.173346   54095 out.go:360] Setting OutFile to fd 1 ...
I1212 00:04:06.173541   54095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:04:06.173554   54095 out.go:374] Setting ErrFile to fd 2...
I1212 00:04:06.173561   54095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:04:06.173828   54095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
I1212 00:04:06.174649   54095 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:04:06.175584   54095 config.go:182] Loaded profile config "functional-896350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:04:06.176034   54095 cli_runner.go:164] Run: docker container inspect functional-896350 --format={{.State.Status}}
I1212 00:04:06.196777   54095 ssh_runner.go:195] Run: systemctl --version
I1212 00:04:06.196830   54095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-896350
I1212 00:04:06.216605   54095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/functional-896350/id_rsa Username:docker}
I1212 00:04:06.311705   54095 build_images.go:162] Building image from path: /tmp/build.861274050.tar
I1212 00:04:06.311785   54095 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:04:06.333357   54095 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.861274050.tar
I1212 00:04:06.337058   54095 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.861274050.tar: stat -c "%s %y" /var/lib/minikube/build/build.861274050.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.861274050.tar': No such file or directory
I1212 00:04:06.337103   54095 ssh_runner.go:362] scp /tmp/build.861274050.tar --> /var/lib/minikube/build/build.861274050.tar (3072 bytes)
I1212 00:04:06.391364   54095 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.861274050
I1212 00:04:06.410953   54095 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.861274050 -xf /var/lib/minikube/build/build.861274050.tar
I1212 00:04:06.431589   54095 crio.go:315] Building image: /var/lib/minikube/build/build.861274050
I1212 00:04:06.431661   54095 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-896350 /var/lib/minikube/build/build.861274050 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 00:04:08.231893   54095 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-896350 /var/lib/minikube/build/build.861274050 --cgroup-manager=cgroupfs: (1.80020676s)
I1212 00:04:08.231949   54095 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.861274050
I1212 00:04:08.240700   54095 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.861274050.tar
I1212 00:04:08.248418   54095 build_images.go:218] Built localhost/my-image:functional-896350 from /tmp/build.861274050.tar
I1212 00:04:08.248450   54095 build_images.go:134] succeeded building to: functional-896350
I1212 00:04:08.248457   54095 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)
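The STEP lines in the build output above imply that testdata/build contains a Dockerfile roughly like the following (a sketch reconstructed from the logged steps; the exact file in the repository is not shown in this report):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /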

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-896350
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.01s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image load --daemon kicbase/echo-server:functional-896350 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-896350 image load --daemon kicbase/echo-server:functional-896350 --alsologtostderr: (1.20168712s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-896350 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-896350 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-896350 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 48473: os: process already finished
helpers_test.go:526: unable to kill pid 48200: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-896350 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-896350 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-896350 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [86ffdb68-9ab6-4053-9d21-0e058c36fde6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [86ffdb68-9ab6-4053-9d21-0e058c36fde6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.003509499s
I1212 00:03:55.618010   14503 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-896350
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image load --daemon kicbase/echo-server:functional-896350 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image ls
I1212 00:03:45.343844   14503 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image save kicbase/echo-server:functional-896350 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image rm kicbase/echo-server:functional-896350 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-896350
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 image save --daemon kicbase/echo-server:functional-896350 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-896350
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-896350 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.16.208 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-896350 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-896350 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-896350 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-h8gk2" [3a035df1-6df1-4b99-b5b5-8977817f51e7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-h8gk2" [3a035df1-6df1-4b99-b5b5-8977817f51e7] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003428538s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "320.152751ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.410185ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "368.029824ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.462646ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896350 /tmp/TestFunctionalparallelMountCmdany-port4281885444/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765497838965032996" to /tmp/TestFunctionalparallelMountCmdany-port4281885444/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765497838965032996" to /tmp/TestFunctionalparallelMountCmdany-port4281885444/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765497838965032996" to /tmp/TestFunctionalparallelMountCmdany-port4281885444/001/test-1765497838965032996
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.593112ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:03:59.255888   14503 retry.go:31] will retry after 607.912514ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:03 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:03 test-1765497838965032996
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh cat /mount-9p/test-1765497838965032996
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-896350 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [e76519f3-9559-4a2d-b48c-4f7a2e341830] Pending
helpers_test.go:353: "busybox-mount" [e76519f3-9559-4a2d-b48c-4f7a2e341830] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [e76519f3-9559-4a2d-b48c-4f7a2e341830] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [e76519f3-9559-4a2d-b48c-4f7a2e341830] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003072672s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-896350 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896350 /tmp/TestFunctionalparallelMountCmdany-port4281885444/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.06s)
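In outline, the any-port check above mounts a host directory into the guest over 9p, verifies the mount, inspects the files, and then unmounts. A condensed sketch of the same commands, run manually (the /tmp/TestFunctionalparallelMountCmdany-port... directory is generated per test run, so <host-dir> stands in for any writable host directory, and the mount command is backgrounded here in place of the test harness's daemon helper):

	out/minikube-linux-amd64 mount -p functional-896350 <host-dir>:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-896350 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-896350 ssh "sudo umount -f /mount-9p"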

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 service list
E1212 00:04:03.310352   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-896350 service list: (1.724032637s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.72s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-896350 service list -o json: (1.807828714s)
functional_test.go:1504: Took "1.807921179s" to run "out/minikube-linux-amd64 -p functional-896350 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.81s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896350 /tmp/TestFunctionalparallelMountCmdspecific-port564444492/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.437679ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:04:05.367747   14503 retry.go:31] will retry after 698.238936ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896350 /tmp/TestFunctionalparallelMountCmdspecific-port564444492/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896350 ssh "sudo umount -f /mount-9p": exit status 1 (293.253265ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-896350 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896350 /tmp/TestFunctionalparallelMountCmdspecific-port564444492/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.18s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30334
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 service hello-node --url --format={{.IP}}
2025/12/12 00:04:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.67s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3923834891/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3923834891/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3923834891/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T" /mount1: exit status 1 (420.633382ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:04:07.621047   14503 retry.go:31] will retry after 377.524651ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-896350 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3923834891/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3923834891/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3923834891/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-896350 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30334
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-896350
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-896350
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-896350
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22101-10975/.minikube/files/etc/test/nested/copy/14503/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (38.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-155345 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-155345 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (38.430001838s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (38.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (11.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1212 00:04:51.030787   14503 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-155345 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-155345 --alsologtostderr -v=8: (11.321134497s)
functional_test.go:678: soft start took 11.321459221s for "functional-155345" cluster.
I1212 00:05:02.352245   14503 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (11.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-155345 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.52s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3185350541/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 cache add minikube-local-cache-test:functional-155345
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 cache delete minikube-local-cache-test:functional-155345
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-155345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-155345 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (274.089102ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.49s)
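
The reload flow above is: remove the image from the node with crictl, confirm `inspecti` now fails, run `minikube cache reload`, and confirm `inspecti` succeeds again. A minimal Go sketch of the same four steps, assuming the minikube binary is on PATH and a profile named functional-155345 exists (both taken from this run, not guaranteed elsewhere):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func run(args ...string) error {
		// Each step shells out to the minikube CLI, mirroring the steps in the log above.
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		const profile = "functional-155345" // profile name from this run
		const img = "registry.k8s.io/pause:latest"

		// 1. Remove the image inside the node.
		if err := run("-p", profile, "ssh", "sudo crictl rmi "+img); err != nil {
			log.Fatal(err)
		}
		// 2. inspecti should now fail: the image is gone from the node.
		if err := run("-p", profile, "ssh", "sudo crictl inspecti "+img); err == nil {
			log.Fatal("expected inspecti to fail after rmi")
		}
		// 3. Reload the image from minikube's local cache.
		if err := run("-p", profile, "cache", "reload"); err != nil {
			log.Fatal(err)
		}
		// 4. inspecti should succeed again.
		if err := run("-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
			log.Fatal(err)
		}
	}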

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 kubectl -- --context functional-155345 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-155345 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (44.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-155345 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 00:05:25.235085   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-155345 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.82774924s)
functional_test.go:776: restart took 44.827851899s for "functional-155345" cluster.
I1212 00:05:53.233109   14503 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (44.83s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-155345 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)
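
The health check above amounts to listing the control-plane pods as JSON and reporting each pod's phase and Ready condition. A rough equivalent using kubectl and encoding/json; the struct below is a hand-written subset of the pod schema, covering only the fields this check reads:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// podList mirrors just the parts of `kubectl get po -o json` this check reads.
	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-155345",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
		}
	}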

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-155345 logs: (1.125778131s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1309446739/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-155345 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1309446739/001/logs.txt: (1.137099163s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-155345 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-155345
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-155345: exit status 115 (327.165703ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30407 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-155345 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.02s)
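
Exit status 115 together with SVC_UNREACHABLE is the expected outcome here: the Service exists and receives a NodePort, but no pod backs it, so `minikube service` refuses to hand out a working URL. A sketch that applies the same manifest and asserts a non-zero exit from `minikube service` (exit code 115 in this run):

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Create a Service whose selector matches no running pod.
		if out, err := exec.Command("kubectl", "--context", "functional-155345",
			"apply", "-f", "testdata/invalidsvc.yaml").CombinedOutput(); err != nil {
			log.Fatalf("%v\n%s", err, out)
		}

		out, err := exec.Command("minikube", "-p", "functional-155345",
			"service", "invalid-svc").CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The run above exits 115 and prints SVC_UNREACHABLE to stderr.
			fmt.Printf("exit code %d\n%s", ee.ExitCode(), out)
			return
		}
		log.Fatal("expected `minikube service` to fail for a pod-less service")
	}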

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-155345 config get cpus: exit status 14 (83.752619ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-155345 config get cpus: exit status 14 (80.916278ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.45s)
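
The exit status 14 above is what `config get` returns for a key that is not set, so the unset/get/set/get/unset/get sequence exercises both the found and not-found paths. A small sketch of the same round trip, reading the exit code through exec.ExitError:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	// configGet returns the printed value and the command's exit code (0 on success).
	func configGet(profile, key string) (string, int) {
		out, err := exec.Command("minikube", "-p", profile, "config", "get", key).Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return "", ee.ExitCode()
		}
		return string(out), 0
	}

	func main() {
		const profile = "functional-155345" // profile name from this run

		if _, code := configGet(profile, "cpus"); code != 0 {
			fmt.Println("cpus is unset, exit code:", code) // 14 in the run above
		}
		if err := exec.Command("minikube", "-p", profile, "config", "set", "cpus", "2").Run(); err != nil {
			log.Fatal(err)
		}
		if val, code := configGet(profile, "cpus"); code == 0 {
			fmt.Print("cpus = ", val)
		}
		if err := exec.Command("minikube", "-p", profile, "config", "unset", "cpus").Run(); err != nil {
			log.Fatal(err)
		}
		if _, code := configGet(profile, "cpus"); code != 0 {
			fmt.Println("cpus unset again, exit code:", code)
		}
	}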

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (5.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-155345 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-155345 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 72567: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (5.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-155345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-155345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (171.845617ms)

                                                
                                                
-- stdout --
	* [functional-155345] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:06:14.372685   71505 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:06:14.372981   71505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:14.372995   71505 out.go:374] Setting ErrFile to fd 2...
	I1212 00:06:14.373003   71505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:14.373316   71505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:06:14.373881   71505 out.go:368] Setting JSON to false
	I1212 00:06:14.375111   71505 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2920,"bootTime":1765495054,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:06:14.375175   71505 start.go:143] virtualization: kvm guest
	I1212 00:06:14.376893   71505 out.go:179] * [functional-155345] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:06:14.378663   71505 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:06:14.378692   71505 notify.go:221] Checking for updates...
	I1212 00:06:14.381243   71505 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:06:14.382524   71505 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:06:14.383561   71505 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:06:14.384553   71505 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:06:14.385810   71505 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:06:14.387491   71505 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:06:14.388113   71505 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:06:14.412913   71505 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:06:14.413014   71505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:06:14.470254   71505 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-12 00:06:14.460521255 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:06:14.470361   71505 docker.go:319] overlay module found
	I1212 00:06:14.472069   71505 out.go:179] * Using the docker driver based on existing profile
	I1212 00:06:14.473120   71505 start.go:309] selected driver: docker
	I1212 00:06:14.473133   71505 start.go:927] validating driver "docker" against &{Name:functional-155345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-155345 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:06:14.473220   71505 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:06:14.474745   71505 out.go:203] 
	W1212 00:06:14.475672   71505 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:06:14.476549   71505 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-155345 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)
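
The 250MB dry run fails validation before touching the existing cluster: the run above exits with status 23 and the RSRC_INSUFFICIENT_REQ_MEMORY message because the request is below the 1800MB minimum. A sketch that reproduces and checks that failure mode:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "functional-155345",
			"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()

		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The report above shows exit status 23 alongside RSRC_INSUFFICIENT_REQ_MEMORY.
			fmt.Println("exit code:", ee.ExitCode())
			if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
				fmt.Println("memory validation rejected the request, as expected")
			}
			return
		}
		log.Fatal("expected the 250MB dry run to fail validation")
	}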

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-155345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-155345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (184.39103ms)

                                                
                                                
-- stdout --
	* [functional-155345] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:06:06.824911   68290 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:06:06.825008   68290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:06.825017   68290 out.go:374] Setting ErrFile to fd 2...
	I1212 00:06:06.825022   68290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:06.825311   68290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:06:06.825724   68290 out.go:368] Setting JSON to false
	I1212 00:06:06.826976   68290 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2913,"bootTime":1765495054,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:06:06.827043   68290 start.go:143] virtualization: kvm guest
	I1212 00:06:06.829005   68290 out.go:179] * [functional-155345] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1212 00:06:06.830572   68290 notify.go:221] Checking for updates...
	I1212 00:06:06.830584   68290 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:06:06.831873   68290 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:06:06.833009   68290 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:06:06.834095   68290 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:06:06.835243   68290 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:06:06.836330   68290 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:06:06.838017   68290 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:06:06.838741   68290 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:06:06.872743   68290 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:06:06.872902   68290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:06:06.930841   68290 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-12 00:06:06.921838044 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:06:06.930946   68290 docker.go:319] overlay module found
	I1212 00:06:06.932832   68290 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 00:06:06.933930   68290 start.go:309] selected driver: docker
	I1212 00:06:06.933957   68290 start.go:927] validating driver "docker" against &{Name:functional-155345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-155345 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:06:06.934068   68290 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:06:06.935980   68290 out.go:203] 
	W1212 00:06:06.937074   68290 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:06:06.938063   68290 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.94s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (7.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-155345 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-155345 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-lfvm6" [b186ef4d-2a04-4679-b54d-46cd275ef817] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-lfvm6" [b186ef4d-2a04-4679-b54d-46cd275ef817] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004020073s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30150
functional_test.go:1680: http://192.168.49.2:30150: success! body:
Request served by hello-node-connect-9f67c86d4-lfvm6

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30150
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (7.69s)
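
The connectivity check reduces to two steps: resolve the NodePort URL with `minikube service --url`, then GET it and read the echo-server body (which names the serving pod). A compact sketch with net/http, using the profile and service name from this run:

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// Ask minikube for the NodePort URL of the service created in the test.
		out, err := exec.Command("minikube", "-p", "functional-155345",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			log.Fatal(err)
		}
		url := strings.TrimSpace(string(out))

		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		// The echo-server reply names the pod that served the request.
		fmt.Printf("%s: status %d\n%s", url, resp.StatusCode, body)
	}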

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (21.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [1e29fdef-436c-446d-ba27-d8405eacfa81] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00341695s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-155345 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-155345 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-155345 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-155345 apply -f testdata/storage-provisioner/pod.yaml
I1212 00:06:06.489869   14503 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f17aebdd-13aa-41d9-8a87-aa9ba8ae4cdc] Pending
helpers_test.go:353: "sp-pod" [f17aebdd-13aa-41d9-8a87-aa9ba8ae4cdc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [f17aebdd-13aa-41d9-8a87-aa9ba8ae4cdc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004223838s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-155345 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-155345 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-155345 delete -f testdata/storage-provisioner/pod.yaml: (1.047892488s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-155345 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [072bba2a-89b8-4994-85d1-4ee18fe3dbf1] Pending
helpers_test.go:353: "sp-pod" [072bba2a-89b8-4994-85d1-4ee18fe3dbf1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004783432s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-155345 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (21.78s)
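
Persistence is verified by writing a marker file through the first sp-pod, deleting that pod, recreating it against the same PVC, and listing the mount again. A stripped-down sketch of that sequence; it substitutes `kubectl wait` for the test's label-based polling and reuses the testdata manifests named in the run:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func kubectl(args ...string) string {
		out, err := exec.Command("kubectl", append([]string{"--context", "functional-155345"}, args...)...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		// Claim storage and start a pod that mounts it at /tmp/mount.
		kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m")

		// Write a marker file onto the claimed volume, then throw the pod away.
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")

		// A fresh pod bound to the same PVC should still see the marker.
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m")
		fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
	}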

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.77s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh -n functional-155345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 cp functional-155345:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2911366165/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh -n functional-155345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh -n functional-155345 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.77s)
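
Each cp check follows the same pattern: copy a file, then `ssh sudo cat` the destination and compare it with the source. A minimal sketch for the host-to-node direction, using the paths from this run:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const profile = "functional-155345"
		const src, dst = "testdata/cp-test.txt", "/home/docker/cp-test.txt"

		// Copy the file from the host into the node's filesystem.
		if out, err := exec.Command("minikube", "-p", profile, "cp", src, dst).CombinedOutput(); err != nil {
			log.Fatalf("%v\n%s", err, out)
		}

		// Read it back over ssh and compare with the local copy.
		got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile, "sudo cat "+dst).Output()
		if err != nil {
			log.Fatal(err)
		}
		want, err := os.ReadFile(src)
		if err != nil {
			log.Fatal(err)
		}
		if strings.TrimSpace(string(got)) == strings.TrimSpace(string(want)) {
			fmt.Println("cp round-trip matches")
		} else {
			fmt.Println("contents differ")
		}
	}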

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-155345 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-s7n4f" [538e4612-0403-4d54-adca-ff9d8b22cfac] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-s7n4f" [538e4612-0403-4d54-adca-ff9d8b22cfac] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 19.002757874s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-155345 exec mysql-7d7b65bc95-s7n4f -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-155345 exec mysql-7d7b65bc95-s7n4f -- mysql -ppassword -e "show databases;": exit status 1 (118.478229ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:06:35.974045   14503 retry.go:31] will retry after 992.939782ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-155345 exec mysql-7d7b65bc95-s7n4f -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-155345 exec mysql-7d7b65bc95-s7n4f -- mysql -ppassword -e "show databases;": exit status 1 (83.348516ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:06:37.050628   14503 retry.go:31] will retry after 1.401489564s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-155345 exec mysql-7d7b65bc95-s7n4f -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-155345 exec mysql-7d7b65bc95-s7n4f -- mysql -ppassword -e "show databases;": exit status 1 (82.135676ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:06:38.535460   14503 retry.go:31] will retry after 1.390972167s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-155345 exec mysql-7d7b65bc95-s7n4f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.35s)
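
The two failures above, an auth error while the root account is still being initialised and then a socket that is not yet listening, are normal transient states just after the mysql pod reports Running, which is why the harness retries with a growing delay until `show databases;` succeeds. A small retry loop in the same spirit; the pod name is the one from this run and will differ per deployment:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		const pod = "mysql-7d7b65bc95-s7n4f" // pod name from this run only

		query := []string{"--context", "functional-155345", "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;"}

		// mysqld accepts connections a little after the container reports Running,
		// so retry with a growing delay instead of failing on the first error.
		delay := time.Second
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", query...).CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			log.Printf("attempt %d: %v, retrying in %s", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
		log.Fatal("mysql never became reachable")
	}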

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14503/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "sudo cat /etc/test/nested/copy/14503/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14503.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "sudo cat /etc/ssl/certs/14503.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14503.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "sudo cat /usr/share/ca-certificates/14503.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/145032.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "sudo cat /etc/ssl/certs/145032.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/145032.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "sudo cat /usr/share/ca-certificates/145032.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.65s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-155345 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-155345 ssh "sudo systemctl is-active docker": exit status 1 (312.305819ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-155345 ssh "sudo systemctl is-active containerd": exit status 1 (314.432835ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.63s)
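
`systemctl is-active` exits non-zero for any state other than active (status 3 for inactive, which is what the remote command returns above), so on this crio cluster both docker and containerd are expected to fail the check. A sketch that probes both units the same way:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// On a crio-based node, both of these should report "inactive" and exit non-zero.
		for _, unit := range []string{"docker", "containerd"} {
			out, err := exec.Command("minikube", "-p", "functional-155345",
				"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
			state := strings.TrimSpace(string(out))
			if err != nil && state != "active" {
				fmt.Printf("%s: %s (disabled, as expected)\n", unit, state)
			} else {
				fmt.Printf("%s: unexpectedly active\n", unit)
			}
		}
	}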

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (7.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-155345 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-155345 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-7pjgl" [786767cb-295c-4604-b92b-9fec3298bbb0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-7pjgl" [786767cb-295c-4604-b92b-9fec3298bbb0] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003856755s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (7.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "344.109614ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "69.864084ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "378.534105ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "85.283801ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-155345 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-155345
localhost/kicbase/echo-server:functional-155345
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-155345 image ls --format short --alsologtostderr:
I1212 00:06:17.764304   73370 out.go:360] Setting OutFile to fd 1 ...
I1212 00:06:17.764406   73370 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:17.764418   73370 out.go:374] Setting ErrFile to fd 2...
I1212 00:06:17.764425   73370 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:17.764709   73370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
I1212 00:06:17.765386   73370 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:06:17.765537   73370 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:06:17.766129   73370 cli_runner.go:164] Run: docker container inspect functional-155345 --format={{.State.Status}}
I1212 00:06:17.789566   73370 ssh_runner.go:195] Run: systemctl --version
I1212 00:06:17.789620   73370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155345
I1212 00:06:17.810431   73370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/functional-155345/id_rsa Username:docker}
I1212 00:06:17.913232   73370 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-155345 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-155345  │ ef0ece07f1c3a │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-155345  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-155345 image ls --format table --alsologtostderr:
I1212 00:06:20.321465   73795 out.go:360] Setting OutFile to fd 1 ...
I1212 00:06:20.321779   73795 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:20.321792   73795 out.go:374] Setting ErrFile to fd 2...
I1212 00:06:20.321798   73795 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:20.322041   73795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
I1212 00:06:20.322734   73795 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:06:20.322869   73795 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:06:20.323429   73795 cli_runner.go:164] Run: docker container inspect functional-155345 --format={{.State.Status}}
I1212 00:06:20.343341   73795 ssh_runner.go:195] Run: systemctl --version
I1212 00:06:20.343392   73795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155345
I1212 00:06:20.364678   73795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/functional-155345/id_rsa Username:docker}
I1212 00:06:20.460424   73795 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-155345 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"7bb6219ddab95bdabbe
f83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[
"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13
.1"],"size":"79193994"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd
073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"ef0ece07f1c3a21b2646c5be74433ce93c486662bcb9e4bcc05338971395cdff","repoDigests":["localhost/minikube-local-cache-test@sha256:88498cb2a94d39707097933ed9634dd42a737e4f439130f46165f26ac77900af"],"repoTags":["localhost/minikube-local-cache-test:functional-155345"],"size":"3330"},{"id":"9056ab77afb8e18e04303f11000a9
d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-155345"],"size":"4945146"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec5
7271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-155345 image ls --format json --alsologtostderr:
I1212 00:06:20.064934   73736 out.go:360] Setting OutFile to fd 1 ...
I1212 00:06:20.065229   73736 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:20.065239   73736 out.go:374] Setting ErrFile to fd 2...
I1212 00:06:20.065243   73736 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:20.065412   73736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
I1212 00:06:20.065945   73736 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:06:20.066027   73736 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:06:20.066386   73736 cli_runner.go:164] Run: docker container inspect functional-155345 --format={{.State.Status}}
I1212 00:06:20.085578   73736 ssh_runner.go:195] Run: systemctl --version
I1212 00:06:20.085635   73736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155345
I1212 00:06:20.106496   73736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/functional-155345/id_rsa Username:docker}
I1212 00:06:20.208647   73736 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (1.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-155345 image ls --format yaml --alsologtostderr: (1.318683101s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-155345 image ls --format yaml --alsologtostderr:
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-155345
size: "4945146"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ef0ece07f1c3a21b2646c5be74433ce93c486662bcb9e4bcc05338971395cdff
repoDigests:
- localhost/minikube-local-cache-test@sha256:88498cb2a94d39707097933ed9634dd42a737e4f439130f46165f26ac77900af
repoTags:
- localhost/minikube-local-cache-test:functional-155345
size: "3330"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-155345 image ls --format yaml --alsologtostderr:
I1212 00:06:18.025267   73425 out.go:360] Setting OutFile to fd 1 ...
I1212 00:06:18.025365   73425 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:18.025375   73425 out.go:374] Setting ErrFile to fd 2...
I1212 00:06:18.025381   73425 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:18.025692   73425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
I1212 00:06:18.026393   73425 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:06:18.026534   73425 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:06:18.027173   73425 cli_runner.go:164] Run: docker container inspect functional-155345 --format={{.State.Status}}
I1212 00:06:18.048226   73425 ssh_runner.go:195] Run: systemctl --version
I1212 00:06:18.048282   73425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155345
I1212 00:06:18.069278   73425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/functional-155345/id_rsa Username:docker}
I1212 00:06:18.170063   73425 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 00:06:19.266034   73425 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.095935855s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (1.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-155345 ssh pgrep buildkitd: exit status 1 (264.884669ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image build -t localhost/my-image:functional-155345 testdata/build --alsologtostderr
2025/12/12 00:06:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-155345 image build -t localhost/my-image:functional-155345 testdata/build --alsologtostderr: (1.77951548s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-155345 image build -t localhost/my-image:functional-155345 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5b4dad19ada
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-155345
--> 60c9eec5bf5
Successfully tagged localhost/my-image:functional-155345
60c9eec5bf5e04a290a751d16475b51f869c5b9bf1d6c8d009433ed3a52bf05b
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-155345 image build -t localhost/my-image:functional-155345 testdata/build --alsologtostderr:
I1212 00:06:19.593182   73656 out.go:360] Setting OutFile to fd 1 ...
I1212 00:06:19.593457   73656 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:19.593468   73656 out.go:374] Setting ErrFile to fd 2...
I1212 00:06:19.593485   73656 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:19.593676   73656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
I1212 00:06:19.594225   73656 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:06:19.594822   73656 config.go:182] Loaded profile config "functional-155345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:06:19.595275   73656 cli_runner.go:164] Run: docker container inspect functional-155345 --format={{.State.Status}}
I1212 00:06:19.612705   73656 ssh_runner.go:195] Run: systemctl --version
I1212 00:06:19.612745   73656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155345
I1212 00:06:19.629372   73656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/functional-155345/id_rsa Username:docker}
I1212 00:06:19.724780   73656 build_images.go:162] Building image from path: /tmp/build.1100043246.tar
I1212 00:06:19.724858   73656 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:06:19.732386   73656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1100043246.tar
I1212 00:06:19.735821   73656 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1100043246.tar: stat -c "%s %y" /var/lib/minikube/build/build.1100043246.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1100043246.tar': No such file or directory
I1212 00:06:19.735848   73656 ssh_runner.go:362] scp /tmp/build.1100043246.tar --> /var/lib/minikube/build/build.1100043246.tar (3072 bytes)
I1212 00:06:19.752582   73656 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1100043246
I1212 00:06:19.759886   73656 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1100043246 -xf /var/lib/minikube/build/build.1100043246.tar
I1212 00:06:19.767340   73656 crio.go:315] Building image: /var/lib/minikube/build/build.1100043246
I1212 00:06:19.767409   73656 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-155345 /var/lib/minikube/build/build.1100043246 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 00:06:21.296266   73656 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-155345 /var/lib/minikube/build/build.1100043246 --cgroup-manager=cgroupfs: (1.528829529s)
I1212 00:06:21.296328   73656 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1100043246
I1212 00:06:21.304397   73656 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1100043246.tar
I1212 00:06:21.311527   73656 build_images.go:218] Built localhost/my-image:functional-155345 from /tmp/build.1100043246.tar
I1212 00:06:21.311558   73656 build_images.go:134] succeeded building to: functional-155345
I1212 00:06:21.311565   73656 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-155345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image load --daemon kicbase/echo-server:functional-155345 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-155345 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-155345 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-155345 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 67100: os: process already finished
helpers_test.go:520: unable to terminate pid 66869: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-155345 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-155345 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (10.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-155345 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [092207aa-1ef8-468a-80ed-48be81079207] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [092207aa-1ef8-468a-80ed-48be81079207] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003074006s
I1212 00:06:12.496144   14503 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (10.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image load --daemon kicbase/echo-server:functional-155345 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.81s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-155345
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image load --daemon kicbase/echo-server:functional-155345 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image save kicbase/echo-server:functional-155345 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image rm kicbase/echo-server:functional-155345 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-155345
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 image save --daemon kicbase/echo-server:functional-155345 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-155345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1177490064/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765497966946577338" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1177490064/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765497966946577338" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1177490064/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765497966946577338" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1177490064/001/test-1765497966946577338
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.126387ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:06:07.233040   14503 retry.go:31] will retry after 532.841151ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:06 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:06 test-1765497966946577338
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh cat /mount-9p/test-1765497966946577338
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-155345 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [09f021f8-6622-4cfb-8ef6-917de8d807b4] Pending
helpers_test.go:353: "busybox-mount" [09f021f8-6622-4cfb-8ef6-917de8d807b4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [09f021f8-6622-4cfb-8ef6-917de8d807b4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [09f021f8-6622-4cfb-8ef6-917de8d807b4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.002395223s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-155345 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1177490064/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 service list -o json
functional_test.go:1504: Took "312.893618ms" to run "out/minikube-linux-amd64 -p functional-155345 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31005
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31005
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-155345 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.105.221 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-155345 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2075587780/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (296.401453ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:06:12.979694   14503 retry.go:31] will retry after 616.217046ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2075587780/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-155345 ssh "sudo umount -f /mount-9p": exit status 1 (287.438529ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-155345 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2075587780/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.98s)
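
The specific-port run above is a straightforward 9p mount round-trip; a minimal sketch of the same steps, assuming the profile and port from the log and /tmp/mount-src as a placeholder host directory:

    # expose a host directory inside the guest over 9p on a fixed port
    out/minikube-linux-amd64 mount -p functional-155345 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 --port 46464 &
    # confirm the guest sees a 9p filesystem at the mount point (the test retries this check)
    out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T /mount-9p | grep 9p"
    # force-unmount; "not mounted" (exit status 32) is expected once the mount process is gone
    out/minikube-linux-amd64 -p functional-155345 ssh "sudo umount -f /mount-9p"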

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1884034045/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1884034045/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1884034045/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T" /mount1: exit status 1 (370.547332ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:06:15.033279   14503 retry.go:31] will retry after 452.846389ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T" /mount2
I1212 00:06:15.795651   14503 detect.go:223] nested VM detected
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-155345 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1884034045/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1884034045/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-155345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1884034045/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.74s)
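
VerifyCleanup above drives three concurrent mounts of one source directory and then tears them all down with --kill; a minimal sketch, assuming the same profile and /tmp/mount-src as a placeholder source directory:

    # start three mount daemons against a single host directory
    for target in /mount1 /mount2 /mount3; do
        out/minikube-linux-amd64 mount -p functional-155345 /tmp/mount-src:"$target" --alsologtostderr -v=1 &
    done
    # spot-check that a target is mounted inside the guest
    out/minikube-linux-amd64 -p functional-155345 ssh "findmnt -T /mount1"
    # kill every mount process for the profile in one step, as the test does
    out/minikube-linux-amd64 mount -p functional-155345 --kill=true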

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-155345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-155345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-155345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (133.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1212 00:07:41.374629   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:09.077027   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:36.948755   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:36.955175   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:36.966509   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:36.987807   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:37.029154   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:37.110564   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:37.272076   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:37.593735   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:38.235748   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:39.517592   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:42.078911   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:08:47.200798   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-263555 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m12.803694699s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (133.51s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- rollout status deployment/busybox
E1212 00:08:57.442171   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-263555 kubectl -- rollout status deployment/busybox: (1.508432897s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-8snjb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-bwcnr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-fv6nb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-8snjb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-bwcnr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-fv6nb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-8snjb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-bwcnr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-fv6nb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.24s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-8snjb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-8snjb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-bwcnr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-bwcnr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-fv6nb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-fv6nb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)
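
The PingHostFromPods steps above resolve host.minikube.internal from inside each busybox pod and ping the address the lookup returns (192.168.49.1 in this run); a minimal sketch of one iteration, reusing the pod name from the log:

    # extract the host IP from the nslookup output (line 5, third field), as the test does
    out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-8snjb -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # ping the gateway address returned by the lookup
    out/minikube-linux-amd64 -p ha-263555 kubectl -- exec busybox-7b57f96db7-8snjb -- sh -c "ping -c 1 192.168.49.1"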

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (23.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 node add --alsologtostderr -v 5
E1212 00:09:17.923803   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-263555 node add --alsologtostderr -v 5: (22.48206189s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.32s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-263555 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp testdata/cp-test.txt ha-263555:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1300479045/001/cp-test_ha-263555.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555:/home/docker/cp-test.txt ha-263555-m02:/home/docker/cp-test_ha-263555_ha-263555-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m02 "sudo cat /home/docker/cp-test_ha-263555_ha-263555-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555:/home/docker/cp-test.txt ha-263555-m03:/home/docker/cp-test_ha-263555_ha-263555-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m03 "sudo cat /home/docker/cp-test_ha-263555_ha-263555-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555:/home/docker/cp-test.txt ha-263555-m04:/home/docker/cp-test_ha-263555_ha-263555-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m04 "sudo cat /home/docker/cp-test_ha-263555_ha-263555-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp testdata/cp-test.txt ha-263555-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1300479045/001/cp-test_ha-263555-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m02:/home/docker/cp-test.txt ha-263555:/home/docker/cp-test_ha-263555-m02_ha-263555.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555 "sudo cat /home/docker/cp-test_ha-263555-m02_ha-263555.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m02:/home/docker/cp-test.txt ha-263555-m03:/home/docker/cp-test_ha-263555-m02_ha-263555-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m03 "sudo cat /home/docker/cp-test_ha-263555-m02_ha-263555-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m02:/home/docker/cp-test.txt ha-263555-m04:/home/docker/cp-test_ha-263555-m02_ha-263555-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m04 "sudo cat /home/docker/cp-test_ha-263555-m02_ha-263555-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp testdata/cp-test.txt ha-263555-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1300479045/001/cp-test_ha-263555-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m03:/home/docker/cp-test.txt ha-263555:/home/docker/cp-test_ha-263555-m03_ha-263555.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555 "sudo cat /home/docker/cp-test_ha-263555-m03_ha-263555.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m03:/home/docker/cp-test.txt ha-263555-m02:/home/docker/cp-test_ha-263555-m03_ha-263555-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m02 "sudo cat /home/docker/cp-test_ha-263555-m03_ha-263555-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m03:/home/docker/cp-test.txt ha-263555-m04:/home/docker/cp-test_ha-263555-m03_ha-263555-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m04 "sudo cat /home/docker/cp-test_ha-263555-m03_ha-263555-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp testdata/cp-test.txt ha-263555-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1300479045/001/cp-test_ha-263555-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m04:/home/docker/cp-test.txt ha-263555:/home/docker/cp-test_ha-263555-m04_ha-263555.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555 "sudo cat /home/docker/cp-test_ha-263555-m04_ha-263555.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m04:/home/docker/cp-test.txt ha-263555-m02:/home/docker/cp-test_ha-263555-m04_ha-263555-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m02 "sudo cat /home/docker/cp-test_ha-263555-m04_ha-263555-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 cp ha-263555-m04:/home/docker/cp-test.txt ha-263555-m03:/home/docker/cp-test_ha-263555-m04_ha-263555-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m03 "sudo cat /home/docker/cp-test_ha-263555-m04_ha-263555-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.41s)
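
CopyFile above pushes testdata/cp-test.txt to each node and reads it back over ssh for every node pair; a minimal sketch of one host-to-node and one node-to-node round-trip, using commands taken from the log:

    # host -> primary control-plane node
    out/minikube-linux-amd64 -p ha-263555 cp testdata/cp-test.txt ha-263555:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555 "sudo cat /home/docker/cp-test.txt"
    # node -> node, then read the copy back from the target node
    out/minikube-linux-amd64 -p ha-263555 cp ha-263555:/home/docker/cp-test.txt ha-263555-m02:/home/docker/cp-test_ha-263555_ha-263555-m02.txt
    out/minikube-linux-amd64 -p ha-263555 ssh -n ha-263555-m02 "sudo cat /home/docker/cp-test_ha-263555_ha-263555-m02.txt"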

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-263555 node stop m02 --alsologtostderr -v 5: (13.542733611s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-263555 status --alsologtostderr -v 5: exit status 7 (655.44837ms)

                                                
                                                
-- stdout --
	ha-263555
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-263555-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-263555-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-263555-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:09:56.133429   94478 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:09:56.133539   94478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:09:56.133549   94478 out.go:374] Setting ErrFile to fd 2...
	I1212 00:09:56.133553   94478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:09:56.133747   94478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:09:56.133916   94478 out.go:368] Setting JSON to false
	I1212 00:09:56.133947   94478 mustload.go:66] Loading cluster: ha-263555
	I1212 00:09:56.134097   94478 notify.go:221] Checking for updates...
	I1212 00:09:56.134434   94478 config.go:182] Loaded profile config "ha-263555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:09:56.134451   94478 status.go:174] checking status of ha-263555 ...
	I1212 00:09:56.134949   94478 cli_runner.go:164] Run: docker container inspect ha-263555 --format={{.State.Status}}
	I1212 00:09:56.152467   94478 status.go:371] ha-263555 host status = "Running" (err=<nil>)
	I1212 00:09:56.152501   94478 host.go:66] Checking if "ha-263555" exists ...
	I1212 00:09:56.152733   94478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-263555
	I1212 00:09:56.169294   94478 host.go:66] Checking if "ha-263555" exists ...
	I1212 00:09:56.169562   94478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:09:56.169623   94478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-263555
	I1212 00:09:56.186097   94478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/ha-263555/id_rsa Username:docker}
	I1212 00:09:56.278794   94478 ssh_runner.go:195] Run: systemctl --version
	I1212 00:09:56.284803   94478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:09:56.296739   94478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:09:56.351154   94478 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-12 00:09:56.341793654 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:09:56.351671   94478 kubeconfig.go:125] found "ha-263555" server: "https://192.168.49.254:8443"
	I1212 00:09:56.351701   94478 api_server.go:166] Checking apiserver status ...
	I1212 00:09:56.351736   94478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:09:56.362859   94478 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1272/cgroup
	W1212 00:09:56.371004   94478 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1272/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:09:56.371044   94478 ssh_runner.go:195] Run: ls
	I1212 00:09:56.374529   94478 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 00:09:56.378597   94478 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 00:09:56.378619   94478 status.go:463] ha-263555 apiserver status = Running (err=<nil>)
	I1212 00:09:56.378630   94478 status.go:176] ha-263555 status: &{Name:ha-263555 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:09:56.378648   94478 status.go:174] checking status of ha-263555-m02 ...
	I1212 00:09:56.378885   94478 cli_runner.go:164] Run: docker container inspect ha-263555-m02 --format={{.State.Status}}
	I1212 00:09:56.395695   94478 status.go:371] ha-263555-m02 host status = "Stopped" (err=<nil>)
	I1212 00:09:56.395710   94478 status.go:384] host is not running, skipping remaining checks
	I1212 00:09:56.395716   94478 status.go:176] ha-263555-m02 status: &{Name:ha-263555-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:09:56.395729   94478 status.go:174] checking status of ha-263555-m03 ...
	I1212 00:09:56.395965   94478 cli_runner.go:164] Run: docker container inspect ha-263555-m03 --format={{.State.Status}}
	I1212 00:09:56.412353   94478 status.go:371] ha-263555-m03 host status = "Running" (err=<nil>)
	I1212 00:09:56.412370   94478 host.go:66] Checking if "ha-263555-m03" exists ...
	I1212 00:09:56.412642   94478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-263555-m03
	I1212 00:09:56.428983   94478 host.go:66] Checking if "ha-263555-m03" exists ...
	I1212 00:09:56.429197   94478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:09:56.429228   94478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-263555-m03
	I1212 00:09:56.445152   94478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/ha-263555-m03/id_rsa Username:docker}
	I1212 00:09:56.536194   94478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:09:56.548267   94478 kubeconfig.go:125] found "ha-263555" server: "https://192.168.49.254:8443"
	I1212 00:09:56.548290   94478 api_server.go:166] Checking apiserver status ...
	I1212 00:09:56.548320   94478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:09:56.558490   94478 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1185/cgroup
	W1212 00:09:56.566392   94478 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1185/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:09:56.566464   94478 ssh_runner.go:195] Run: ls
	I1212 00:09:56.570143   94478 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 00:09:56.573955   94478 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 00:09:56.573972   94478 status.go:463] ha-263555-m03 apiserver status = Running (err=<nil>)
	I1212 00:09:56.573979   94478 status.go:176] ha-263555-m03 status: &{Name:ha-263555-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:09:56.573993   94478 status.go:174] checking status of ha-263555-m04 ...
	I1212 00:09:56.574202   94478 cli_runner.go:164] Run: docker container inspect ha-263555-m04 --format={{.State.Status}}
	I1212 00:09:56.591272   94478 status.go:371] ha-263555-m04 host status = "Running" (err=<nil>)
	I1212 00:09:56.591291   94478 host.go:66] Checking if "ha-263555-m04" exists ...
	I1212 00:09:56.591523   94478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-263555-m04
	I1212 00:09:56.608229   94478 host.go:66] Checking if "ha-263555-m04" exists ...
	I1212 00:09:56.608433   94478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:09:56.608471   94478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-263555-m04
	I1212 00:09:56.627130   94478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/ha-263555-m04/id_rsa Username:docker}
	I1212 00:09:56.718892   94478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:09:56.730120   94478 status.go:176] ha-263555-m04 status: &{Name:ha-263555-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.20s)
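
The non-zero exit above is expected: minikube status returns a non-zero code (7 in this run) while any node in the profile is stopped, and the test still passes because the reported per-node state (m02 Stopped, the others Running) is what it expects. A minimal sketch of the same check, assuming the profile from the log:

    out/minikube-linux-amd64 -p ha-263555 node stop m02 --alsologtostderr -v 5
    # status exits non-zero (7 here) while a node is stopped; inspect the code directly
    out/minikube-linux-amd64 -p ha-263555 status --alsologtostderr -v 5
    echo "status exit code: $?"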

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 node start m02 --alsologtostderr -v 5
E1212 00:09:58.885791   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-263555 node start m02 --alsologtostderr -v 5: (7.469422993s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.36s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-263555 stop --alsologtostderr -v 5: (50.081647547s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 start --wait true --alsologtostderr -v 5
E1212 00:10:59.761177   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:59.767559   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:59.778897   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:59.800232   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:59.841555   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:59.922965   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:00.084448   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:00.406107   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:01.047709   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:02.329563   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:04.890924   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:10.012440   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:20.254616   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:20.807606   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:40.736679   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-263555 start --wait true --alsologtostderr -v 5: (58.622283806s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.83s)
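
RestartClusterKeepsNodes above checks that a full stop/start cycle leaves the node list unchanged; a minimal sketch of the same comparison, assuming the profile from the log and /tmp paths as placeholders:

    out/minikube-linux-amd64 -p ha-263555 node list --alsologtostderr -v 5 > /tmp/nodes-before.txt
    out/minikube-linux-amd64 -p ha-263555 stop --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-263555 start --wait true --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-263555 node list --alsologtostderr -v 5 > /tmp/nodes-after.txt
    # the two listings should be identical if the restart dropped no nodes
    diff /tmp/nodes-before.txt /tmp/nodes-after.txt && echo "node list unchanged"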

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-263555 node delete m03 --alsologtostderr -v 5: (9.656364374s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.44s)
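
The readiness check after the delete prints each node's Ready condition through a go-template; a minimal sketch, with the template re-quoted for direct shell use (the log shows it in the test framework's own quoting):

    out/minikube-linux-amd64 -p ha-263555 node delete m03 --alsologtostderr -v 5
    # expect one "True" line per remaining node
    kubectl --context ha-263555 get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'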

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (42.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 stop --alsologtostderr -v 5
E1212 00:12:21.698274   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:41.374141   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-263555 stop --alsologtostderr -v 5: (42.83661421s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-263555 status --alsologtostderr -v 5: exit status 7 (113.849576ms)

                                                
                                                
-- stdout --
	ha-263555
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-263555-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-263555-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:12:49.453955  108760 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:12:49.454086  108760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:12:49.454097  108760 out.go:374] Setting ErrFile to fd 2...
	I1212 00:12:49.454104  108760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:12:49.454287  108760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:12:49.454526  108760 out.go:368] Setting JSON to false
	I1212 00:12:49.454554  108760 mustload.go:66] Loading cluster: ha-263555
	I1212 00:12:49.454596  108760 notify.go:221] Checking for updates...
	I1212 00:12:49.454907  108760 config.go:182] Loaded profile config "ha-263555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:12:49.454923  108760 status.go:174] checking status of ha-263555 ...
	I1212 00:12:49.455379  108760 cli_runner.go:164] Run: docker container inspect ha-263555 --format={{.State.Status}}
	I1212 00:12:49.476089  108760 status.go:371] ha-263555 host status = "Stopped" (err=<nil>)
	I1212 00:12:49.476130  108760 status.go:384] host is not running, skipping remaining checks
	I1212 00:12:49.476143  108760 status.go:176] ha-263555 status: &{Name:ha-263555 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:12:49.476186  108760 status.go:174] checking status of ha-263555-m02 ...
	I1212 00:12:49.476443  108760 cli_runner.go:164] Run: docker container inspect ha-263555-m02 --format={{.State.Status}}
	I1212 00:12:49.493447  108760 status.go:371] ha-263555-m02 host status = "Stopped" (err=<nil>)
	I1212 00:12:49.493463  108760 status.go:384] host is not running, skipping remaining checks
	I1212 00:12:49.493469  108760 status.go:176] ha-263555-m02 status: &{Name:ha-263555-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:12:49.493502  108760 status.go:174] checking status of ha-263555-m04 ...
	I1212 00:12:49.493707  108760 cli_runner.go:164] Run: docker container inspect ha-263555-m04 --format={{.State.Status}}
	I1212 00:12:49.509970  108760 status.go:371] ha-263555-m04 host status = "Stopped" (err=<nil>)
	I1212 00:12:49.509987  108760 status.go:384] host is not running, skipping remaining checks
	I1212 00:12:49.509994  108760 status.go:176] ha-263555-m04 status: &{Name:ha-263555-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (42.95s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (53.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1212 00:13:36.948914   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-263555 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (52.828982176s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.59s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1212 00:13:43.620190   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (68.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 node add --control-plane --alsologtostderr -v 5
E1212 00:14:04.651506   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-263555 node add --control-plane --alsologtostderr -v 5: (1m8.118423211s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-263555 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (68.95s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (34.63s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-286508 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-286508 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (34.62640008s)
--- PASS: TestJSONOutput/start/Command (34.63s)
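
With --output=json, minikube start emits a stream of JSON events (one per line), and the step-ordering subtests below (DistinctCurrentSteps, IncreasingCurrentSteps) examine the step numbers carried in those events. A minimal sketch of pulling the steps out by hand; jq and the data.currentstep/data.name field names are assumptions here, not something the test itself asserts:

    out/minikube-linux-amd64 start -p json-output-286508 --output=json --user=testUser \
        --memory=3072 --wait=true --driver=docker --container-runtime=crio \
        | jq -r 'select(.data.currentstep != null) | "\(.data.currentstep) \(.data.name)"'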

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.26s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-286508 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-286508 --output=json --user=testUser: (6.264519026s)
--- PASS: TestJSONOutput/stop/Command (6.26s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-554049 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-554049 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.832006ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8bf601c1-1dee-4aaf-b136-cd2fa4aeb5ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-554049] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"15ba4377-20d6-4d61-a607-c441f89c8f85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22101"}}
	{"specversion":"1.0","id":"5e05fdc7-30e5-440f-82cc-c1e82afa63f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8bfa0cc2-3d8c-42d2-812c-3104cd66b244","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig"}}
	{"specversion":"1.0","id":"aa6a4e53-2e68-436c-a861-955e71d852b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube"}}
	{"specversion":"1.0","id":"77c36a16-20bc-4514-86a3-073d62c6e84f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fcfcba34-8b0d-4479-974a-0522f1b7ecbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8853dcaf-741c-43ea-9077-cc6141947a0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-554049" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-554049
--- PASS: TestErrorJSONOutput (0.22s)
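Note (not part of the captured test log): the lines in the stdout above are newline-delimited CloudEvents-style JSON, one event per line. As a minimal sketch, assuming only the fields visible in that output (specversion, id, source, type, datacontenttype, data with string values), such a stream could be decoded like this:

// decode.go: illustrative sketch, not minikube or test-suite code.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields visible in the stdout above.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// For io.k8s.sigs.minikube.error events (e.g. DRV_UNSUPPORTED_OS above),
		// data also carries "exitcode" and "name".
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

Piping the same `minikube start --output=json` invocation into this sketch would print one "type: message" line per event, ending with the DRV_UNSUPPORTED_OS error shown above.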

                                                
                                    
TestKicCustomNetwork/create_custom_network (28.78s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-829332 --network=
E1212 00:15:59.761436   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-829332 --network=: (26.668709283s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-829332" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-829332
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-829332: (2.095118409s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.78s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (21.55s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-570461 --network=bridge
E1212 00:16:27.461626   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-570461 --network=bridge: (19.565480684s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-570461" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-570461
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-570461: (1.966287907s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.55s)

                                                
                                    
TestKicExistingNetwork (25.9s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1212 00:16:41.014743   14503 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1212 00:16:41.030722   14503 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1212 00:16:41.030791   14503 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1212 00:16:41.030818   14503 cli_runner.go:164] Run: docker network inspect existing-network
W1212 00:16:41.045762   14503 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1212 00:16:41.045786   14503 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1212 00:16:41.045806   14503 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1212 00:16:41.045923   14503 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 00:16:41.062073   14503 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ad72354fdcf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:1e:a7:00:22:62} reservation:<nil>}
I1212 00:16:41.062547   14503 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000682f90}
I1212 00:16:41.062582   14503 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1212 00:16:41.062633   14503 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1212 00:16:41.107227   14503 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-248805 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-248805 --network=existing-network: (23.780915226s)
helpers_test.go:176: Cleaning up "existing-network-248805" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-248805
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-248805: (1.991453412s)
I1212 00:17:06.896291   14503 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.90s)

                                                
                                    
TestKicCustomSubnet (22.34s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-637015 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-637015 --subnet=192.168.60.0/24: (20.240766534s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-637015 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-637015" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-637015
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-637015: (2.08160565s)
--- PASS: TestKicCustomSubnet (22.34s)

                                                
                                    
TestKicStaticIP (26.55s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-399604 --static-ip=192.168.200.200
E1212 00:17:41.373623   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-399604 --static-ip=192.168.200.200: (24.306433281s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-399604 ip
helpers_test.go:176: Cleaning up "static-ip-399604" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-399604
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-399604: (2.102428953s)
--- PASS: TestKicStaticIP (26.55s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (43.96s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-676411 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-676411 --driver=docker  --container-runtime=crio: (18.744312371s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-678514 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-678514 --driver=docker  --container-runtime=crio: (19.373692395s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-676411
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-678514
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-678514" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-678514
E1212 00:18:36.949613   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-678514: (2.326155404s)
helpers_test.go:176: Cleaning up "first-676411" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-676411
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-676411: (2.336058641s)
--- PASS: TestMinikubeProfile (43.96s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-910493 --memory=3072 --mount-string /tmp/TestMountStartserial3173203392/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-910493 --memory=3072 --mount-string /tmp/TestMountStartserial3173203392/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.599270284s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.60s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-910493 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.67s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-923503 --memory=3072 --mount-string /tmp/TestMountStartserial3173203392/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-923503 --memory=3072 --mount-string /tmp/TestMountStartserial3173203392/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.673888116s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.67s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-923503 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-910493 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-910493 --alsologtostderr -v=5: (1.645748454s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-923503 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-923503
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-923503: (1.249124635s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.52s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-923503
E1212 00:19:04.440609   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-923503: (6.520456886s)
--- PASS: TestMountStart/serial/RestartStopped (7.52s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-923503 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (89.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-626592 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-626592 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m28.668631737s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (89.13s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (2.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-626592 -- rollout status deployment/busybox: (1.059678768s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- exec busybox-7b57f96db7-q7hgs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- exec busybox-7b57f96db7-tqh6f -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- exec busybox-7b57f96db7-q7hgs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- exec busybox-7b57f96db7-tqh6f -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- exec busybox-7b57f96db7-q7hgs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- exec busybox-7b57f96db7-tqh6f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (2.43s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- exec busybox-7b57f96db7-q7hgs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- exec busybox-7b57f96db7-q7hgs -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- exec busybox-7b57f96db7-tqh6f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626592 -- exec busybox-7b57f96db7-tqh6f -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)

                                                
                                    
TestMultiNode/serial/AddNode (22.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-626592 -v=5 --alsologtostderr
E1212 00:20:59.759464   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-626592 -v=5 --alsologtostderr: (22.21120139s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (22.84s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-626592 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp testdata/cp-test.txt multinode-626592:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp multinode-626592:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile458918030/001/cp-test_multinode-626592.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp multinode-626592:/home/docker/cp-test.txt multinode-626592-m02:/home/docker/cp-test_multinode-626592_multinode-626592-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m02 "sudo cat /home/docker/cp-test_multinode-626592_multinode-626592-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp multinode-626592:/home/docker/cp-test.txt multinode-626592-m03:/home/docker/cp-test_multinode-626592_multinode-626592-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m03 "sudo cat /home/docker/cp-test_multinode-626592_multinode-626592-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp testdata/cp-test.txt multinode-626592-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp multinode-626592-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile458918030/001/cp-test_multinode-626592-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp multinode-626592-m02:/home/docker/cp-test.txt multinode-626592:/home/docker/cp-test_multinode-626592-m02_multinode-626592.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592 "sudo cat /home/docker/cp-test_multinode-626592-m02_multinode-626592.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp multinode-626592-m02:/home/docker/cp-test.txt multinode-626592-m03:/home/docker/cp-test_multinode-626592-m02_multinode-626592-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m03 "sudo cat /home/docker/cp-test_multinode-626592-m02_multinode-626592-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp testdata/cp-test.txt multinode-626592-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp multinode-626592-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile458918030/001/cp-test_multinode-626592-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp multinode-626592-m03:/home/docker/cp-test.txt multinode-626592:/home/docker/cp-test_multinode-626592-m03_multinode-626592.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592 "sudo cat /home/docker/cp-test_multinode-626592-m03_multinode-626592.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 cp multinode-626592-m03:/home/docker/cp-test.txt multinode-626592-m02:/home/docker/cp-test_multinode-626592-m03_multinode-626592-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 ssh -n multinode-626592-m02 "sudo cat /home/docker/cp-test_multinode-626592-m03_multinode-626592-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.43s)

                                                
                                    
TestMultiNode/serial/StopNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-626592 node stop m03: (1.255239392s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-626592 status: exit status 7 (473.269507ms)

                                                
                                                
-- stdout --
	multinode-626592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-626592-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-626592-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-626592 status --alsologtostderr: exit status 7 (480.057047ms)

                                                
                                                
-- stdout --
	multinode-626592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-626592-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-626592-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:21:15.352745  169096 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:21:15.352980  169096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:15.352988  169096 out.go:374] Setting ErrFile to fd 2...
	I1212 00:21:15.352992  169096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:15.353170  169096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:21:15.353324  169096 out.go:368] Setting JSON to false
	I1212 00:21:15.353348  169096 mustload.go:66] Loading cluster: multinode-626592
	I1212 00:21:15.353407  169096 notify.go:221] Checking for updates...
	I1212 00:21:15.353682  169096 config.go:182] Loaded profile config "multinode-626592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:21:15.353696  169096 status.go:174] checking status of multinode-626592 ...
	I1212 00:21:15.354087  169096 cli_runner.go:164] Run: docker container inspect multinode-626592 --format={{.State.Status}}
	I1212 00:21:15.372186  169096 status.go:371] multinode-626592 host status = "Running" (err=<nil>)
	I1212 00:21:15.372203  169096 host.go:66] Checking if "multinode-626592" exists ...
	I1212 00:21:15.372430  169096 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-626592
	I1212 00:21:15.389103  169096 host.go:66] Checking if "multinode-626592" exists ...
	I1212 00:21:15.389441  169096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:21:15.389517  169096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-626592
	I1212 00:21:15.406918  169096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/multinode-626592/id_rsa Username:docker}
	I1212 00:21:15.499229  169096 ssh_runner.go:195] Run: systemctl --version
	I1212 00:21:15.505426  169096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:21:15.516843  169096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:15.572754  169096 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-12 00:21:15.562030858 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:21:15.573422  169096 kubeconfig.go:125] found "multinode-626592" server: "https://192.168.67.2:8443"
	I1212 00:21:15.573449  169096 api_server.go:166] Checking apiserver status ...
	I1212 00:21:15.573513  169096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:15.584334  169096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1235/cgroup
	W1212 00:21:15.592317  169096 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1235/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:15.592358  169096 ssh_runner.go:195] Run: ls
	I1212 00:21:15.595885  169096 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1212 00:21:15.600547  169096 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1212 00:21:15.600567  169096 status.go:463] multinode-626592 apiserver status = Running (err=<nil>)
	I1212 00:21:15.600579  169096 status.go:176] multinode-626592 status: &{Name:multinode-626592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:21:15.600601  169096 status.go:174] checking status of multinode-626592-m02 ...
	I1212 00:21:15.600893  169096 cli_runner.go:164] Run: docker container inspect multinode-626592-m02 --format={{.State.Status}}
	I1212 00:21:15.620366  169096 status.go:371] multinode-626592-m02 host status = "Running" (err=<nil>)
	I1212 00:21:15.620385  169096 host.go:66] Checking if "multinode-626592-m02" exists ...
	I1212 00:21:15.620671  169096 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-626592-m02
	I1212 00:21:15.636797  169096 host.go:66] Checking if "multinode-626592-m02" exists ...
	I1212 00:21:15.637061  169096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:21:15.637096  169096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-626592-m02
	I1212 00:21:15.653498  169096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22101-10975/.minikube/machines/multinode-626592-m02/id_rsa Username:docker}
	I1212 00:21:15.743995  169096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:21:15.755686  169096 status.go:176] multinode-626592-m02 status: &{Name:multinode-626592-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:21:15.755712  169096 status.go:174] checking status of multinode-626592-m03 ...
	I1212 00:21:15.755929  169096 cli_runner.go:164] Run: docker container inspect multinode-626592-m03 --format={{.State.Status}}
	I1212 00:21:15.772903  169096 status.go:371] multinode-626592-m03 host status = "Stopped" (err=<nil>)
	I1212 00:21:15.772920  169096 status.go:384] host is not running, skipping remaining checks
	I1212 00:21:15.772926  169096 status.go:176] multinode-626592-m03 status: &{Name:multinode-626592-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (6.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-626592 node start m03 -v=5 --alsologtostderr: (6.283563137s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.95s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (57.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-626592
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-626592
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-626592: (31.24093413s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-626592 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-626592 --wait=true -v=5 --alsologtostderr: (25.873444178s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-626592
--- PASS: TestMultiNode/serial/RestartKeepsNodes (57.24s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-626592 node delete m03: (4.357557461s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.93s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (17.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 stop
E1212 00:22:41.374236   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-626592 stop: (17.335581163s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-626592 status: exit status 7 (93.970772ms)

                                                
                                                
-- stdout --
	multinode-626592
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-626592-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-626592 status --alsologtostderr: exit status 7 (96.437022ms)

                                                
                                                
-- stdout --
	multinode-626592
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-626592-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:22:42.375639  177970 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:22:42.375738  177970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:22:42.375746  177970 out.go:374] Setting ErrFile to fd 2...
	I1212 00:22:42.375750  177970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:22:42.375941  177970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:22:42.376088  177970 out.go:368] Setting JSON to false
	I1212 00:22:42.376110  177970 mustload.go:66] Loading cluster: multinode-626592
	I1212 00:22:42.376215  177970 notify.go:221] Checking for updates...
	I1212 00:22:42.376411  177970 config.go:182] Loaded profile config "multinode-626592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:22:42.376422  177970 status.go:174] checking status of multinode-626592 ...
	I1212 00:22:42.376840  177970 cli_runner.go:164] Run: docker container inspect multinode-626592 --format={{.State.Status}}
	I1212 00:22:42.397175  177970 status.go:371] multinode-626592 host status = "Stopped" (err=<nil>)
	I1212 00:22:42.397194  177970 status.go:384] host is not running, skipping remaining checks
	I1212 00:22:42.397201  177970 status.go:176] multinode-626592 status: &{Name:multinode-626592 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:22:42.397223  177970 status.go:174] checking status of multinode-626592-m02 ...
	I1212 00:22:42.397502  177970 cli_runner.go:164] Run: docker container inspect multinode-626592-m02 --format={{.State.Status}}
	I1212 00:22:42.414530  177970 status.go:371] multinode-626592-m02 host status = "Stopped" (err=<nil>)
	I1212 00:22:42.414548  177970 status.go:384] host is not running, skipping remaining checks
	I1212 00:22:42.414553  177970 status.go:176] multinode-626592-m02 status: &{Name:multinode-626592-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (17.53s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (44.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-626592 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-626592 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (44.357986893s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626592 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.93s)
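
The readiness check above renders each node's Ready condition with a kubectl go-template. A minimal Go sketch of the same idea, assuming kubectl is on PATH and the current kubeconfig points at the restarted cluster:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same go-template as the test run above: print the status of each node's Ready condition.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	// Each ready node contributes one "True" line.
	ready := strings.Count(string(out), "True")
	fmt.Printf("%d node(s) report Ready=True\n", ready)
}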

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-626592
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-626592-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-626592-m02 --driver=docker  --container-runtime=crio: exit status 14 (69.523945ms)

                                                
                                                
-- stdout --
	* [multinode-626592-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-626592-m02' is duplicated with machine name 'multinode-626592-m02' in profile 'multinode-626592'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-626592-m03 --driver=docker  --container-runtime=crio
E1212 00:23:36.949644   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-626592-m03 --driver=docker  --container-runtime=crio: (19.343446698s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-626592
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-626592: exit status 80 (278.249956ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-626592 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-626592-m03 already exists in multinode-626592-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-626592-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-626592-m03: (2.306168393s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.05s)
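
The name-conflict checks above rely only on minikube's exit codes: 14 (MK_USAGE) when a new profile name collides with an existing machine name, and 80 (GUEST_NODE_ADD) when node add hits a node that already exists as its own profile. A minimal Go sketch asserting those codes (binary path and profile names are placeholders from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and returns its exit code (0 on success, -1 if it did not run at all).
func exitCode(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	return -1
}

func main() {
	mk := "out/minikube-linux-amd64"
	// Expected 14: the profile name duplicates an existing machine name.
	fmt.Println("duplicate profile name exit code:",
		exitCode(mk, "start", "-p", "multinode-626592-m02", "--driver=docker", "--container-runtime=crio"))
	// Expected 80: the next node name already exists as a standalone profile.
	fmt.Println("node add conflict exit code:",
		exitCode(mk, "node", "add", "-p", "multinode-626592"))
}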

                                                
                                    
TestPreload (110.43s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-326141 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-326141 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (49.600099619s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-326141 image pull gcr.io/k8s-minikube/busybox
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-326141
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-326141: (6.225882996s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-326141 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1212 00:25:00.013249   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-326141 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (51.213042245s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-326141 image list
helpers_test.go:176: Cleaning up "test-preload-326141" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-326141
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-326141: (2.349996145s)
--- PASS: TestPreload (110.43s)
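
A minimal Go sketch of the preload round-trip exercised above (binary path and profile name are placeholders): pull an image into a cluster started with --preload=false, stop it, restart with --preload=true, and confirm the image is still listed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes the minikube binary under test with the given arguments.
func run(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
}

func main() {
	steps := [][]string{
		{"start", "-p", "test-preload-326141", "--preload=false", "--driver=docker", "--container-runtime=crio"},
		{"-p", "test-preload-326141", "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"stop", "-p", "test-preload-326141"},
		{"start", "-p", "test-preload-326141", "--preload=true", "--driver=docker", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if out, err := run(s...); err != nil {
			fmt.Printf("step %v failed: %v\n%s\n", s, err, out)
			return
		}
	}
	// The pulled image should survive the stop/start cycle with preload enabled.
	out, _ := run("-p", "test-preload-326141", "image", "list")
	fmt.Println("busybox retained after restart:", strings.Contains(string(out), "busybox"))
}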

                                                
                                    
TestScheduledStopUnix (99.16s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-363822 --memory=3072 --driver=docker  --container-runtime=crio
E1212 00:25:59.760632   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-363822 --memory=3072 --driver=docker  --container-runtime=crio: (22.308232246s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-363822 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 00:26:06.328028  195121 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:26:06.328120  195121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:26:06.328128  195121 out.go:374] Setting ErrFile to fd 2...
	I1212 00:26:06.328132  195121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:26:06.328332  195121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:26:06.328584  195121 out.go:368] Setting JSON to false
	I1212 00:26:06.328670  195121 mustload.go:66] Loading cluster: scheduled-stop-363822
	I1212 00:26:06.328972  195121 config.go:182] Loaded profile config "scheduled-stop-363822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:26:06.329037  195121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/config.json ...
	I1212 00:26:06.329209  195121 mustload.go:66] Loading cluster: scheduled-stop-363822
	I1212 00:26:06.329302  195121 config.go:182] Loaded profile config "scheduled-stop-363822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-363822 -n scheduled-stop-363822
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 00:26:06.704741  195274 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:26:06.704972  195274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:26:06.704980  195274 out.go:374] Setting ErrFile to fd 2...
	I1212 00:26:06.704984  195274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:26:06.705173  195274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:26:06.705370  195274 out.go:368] Setting JSON to false
	I1212 00:26:06.705558  195274 daemonize_unix.go:73] killing process 195156 as it is an old scheduled stop
	I1212 00:26:06.705667  195274 mustload.go:66] Loading cluster: scheduled-stop-363822
	I1212 00:26:06.705978  195274 config.go:182] Loaded profile config "scheduled-stop-363822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:26:06.706056  195274 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/config.json ...
	I1212 00:26:06.706230  195274 mustload.go:66] Loading cluster: scheduled-stop-363822
	I1212 00:26:06.706319  195274 config.go:182] Loaded profile config "scheduled-stop-363822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1212 00:26:06.711918   14503 retry.go:31] will retry after 109.421µs: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.713074   14503 retry.go:31] will retry after 166.652µs: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.714205   14503 retry.go:31] will retry after 296.702µs: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.715353   14503 retry.go:31] will retry after 469.063µs: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.716520   14503 retry.go:31] will retry after 530.139µs: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.717639   14503 retry.go:31] will retry after 573.619µs: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.718763   14503 retry.go:31] will retry after 704.292µs: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.719900   14503 retry.go:31] will retry after 1.396674ms: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.722144   14503 retry.go:31] will retry after 2.700268ms: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.725329   14503 retry.go:31] will retry after 3.743341ms: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.729533   14503 retry.go:31] will retry after 4.65818ms: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.734723   14503 retry.go:31] will retry after 11.815354ms: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.746928   14503 retry.go:31] will retry after 16.638656ms: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.764156   14503 retry.go:31] will retry after 26.58871ms: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
I1212 00:26:06.791378   14503 retry.go:31] will retry after 26.581694ms: open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-363822 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-363822 -n scheduled-stop-363822
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-363822
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-363822 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 00:26:32.535783  195915 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:26:32.536044  195915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:26:32.536055  195915 out.go:374] Setting ErrFile to fd 2...
	I1212 00:26:32.536060  195915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:26:32.536291  195915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:26:32.536532  195915 out.go:368] Setting JSON to false
	I1212 00:26:32.536603  195915 mustload.go:66] Loading cluster: scheduled-stop-363822
	I1212 00:26:32.536868  195915 config.go:182] Loaded profile config "scheduled-stop-363822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:26:32.536928  195915 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/scheduled-stop-363822/config.json ...
	I1212 00:26:32.537123  195915 mustload.go:66] Loading cluster: scheduled-stop-363822
	I1212 00:26:32.537224  195915 config.go:182] Loaded profile config "scheduled-stop-363822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-363822
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-363822: exit status 7 (77.676667ms)

                                                
                                                
-- stdout --
	scheduled-stop-363822
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-363822 -n scheduled-stop-363822
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-363822 -n scheduled-stop-363822: exit status 7 (76.324192ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-363822" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-363822
E1212 00:27:22.823841   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-363822: (5.425794759s)
--- PASS: TestScheduledStopUnix (99.16s)
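
A minimal Go sketch of the scheduled-stop flow above (binary path, profile name, and the wait duration are placeholders): schedule a stop, cancel it, schedule a short one, wait for it to fire, then expect status to exit with code 7.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// mk runs the minikube binary under test and returns only the exit error.
func mk(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	profile := "scheduled-stop-363822"
	_ = mk("stop", "-p", profile, "--schedule", "5m")   // schedule a stop far in the future
	_ = mk("stop", "-p", profile, "--cancel-scheduled") // cancel all pending scheduled stops
	_ = mk("stop", "-p", profile, "--schedule", "15s")  // schedule a short stop
	time.Sleep(30 * time.Second)                        // give the scheduled stop time to fire

	err := mk("status", "--format={{.Host}}", "-p", profile)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		fmt.Println("host reported Stopped (exit 7), as the test expects")
	} else {
		fmt.Println("unexpected status result:", err)
	}
}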

                                                
                                    
TestInsufficientStorage (11.48s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-333251 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-333251 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.062513047s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2d711970-22e4-4e24-936c-124718bec9da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-333251] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c46e362b-0eae-4996-8b8f-42a57bd3b0ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22101"}}
	{"specversion":"1.0","id":"96005afd-369c-4faa-a56d-339c1a362180","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"11b15f89-abd1-42be-9ddb-526fd214eee4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig"}}
	{"specversion":"1.0","id":"eec3858d-cc40-4429-8d3e-e554c04fe1e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube"}}
	{"specversion":"1.0","id":"2ad41e0f-72a4-4a89-810d-4ca884504ad0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ab9a57ca-59b5-4943-b883-5dcd9893a6d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5f00de41-d9fd-4383-a9f7-72a4a32f473a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"26912d8a-a584-462f-98bc-c4d3739911c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"de35a715-58b5-4af7-bf81-2c2baa84bd2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0a696da-ce58-47b0-bd6f-962ae9cb6b61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2c3f7108-bd21-4588-9fa4-e874d4491df2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-333251\" primary control-plane node in \"insufficient-storage-333251\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0cfbc91-27c5-444c-bf7a-0549d6eeb12e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"527e4b1c-29ac-4890-9d65-26d2a99bc59a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"323ddccf-1a9b-4c83-a911-8b38316c9113","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-333251 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-333251 --output=json --layout=cluster: exit status 7 (276.196884ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-333251","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-333251","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:27:32.448446  198476 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-333251" does not appear in /home/jenkins/minikube-integration/22101-10975/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-333251 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-333251 --output=json --layout=cluster: exit status 7 (277.036119ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-333251","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-333251","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:27:32.726230  198585 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-333251" does not appear in /home/jenkins/minikube-integration/22101-10975/kubeconfig
	E1212 00:27:32.736241  198585 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/insufficient-storage-333251/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-333251" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-333251
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-333251: (1.861823673s)
--- PASS: TestInsufficientStorage (11.48s)
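
The second status call above emits a single JSON document describing the cluster, with StatusCode 507 (InsufficientStorage) at both the cluster and node level. A minimal Go sketch that decodes that payload, assuming only the field names visible in the output above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus mirrors the fields of the --layout=cluster JSON shown in the log.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	// The command exits non-zero (7) in this scenario, so the error is ignored
	// and only the captured stdout is decoded.
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"-p", "insufficient-storage-333251", "--output=json", "--layout=cluster").Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("could not decode status JSON:", err)
		return
	}
	fmt.Printf("cluster %s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
	for _, n := range st.Nodes {
		fmt.Printf("  node %s: %d (%s)\n", n.Name, n.StatusCode, n.StatusName)
	}
}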

                                                
                                    
TestRunningBinaryUpgrade (316.81s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2266094294 start -p running-upgrade-299658 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2266094294 start -p running-upgrade-299658 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.736792724s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-299658 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-299658 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m32.011355147s)
helpers_test.go:176: Cleaning up "running-upgrade-299658" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-299658
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-299658: (2.351156505s)
--- PASS: TestRunningBinaryUpgrade (316.81s)
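
A minimal Go sketch of the running-binary upgrade above (the old-release path is the temporary file shown in the log; both paths are placeholders, and the flags are trimmed relative to the log): start a profile with the previous release, then run start on the same profile with the binary under test so the running cluster is upgraded in place.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "running-upgrade-299658"
	oldBinary := "/tmp/minikube-v1.35.0.2266094294" // previous release, as downloaded by the test
	newBinary := "out/minikube-linux-amd64"         // binary under test

	for _, bin := range []string{oldBinary, newBinary} {
		cmd := exec.Command(bin, "start", "-p", profile,
			"--memory=3072", "--driver=docker", "--container-runtime=crio")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("%s start failed: %v\n%s\n", bin, err, out)
			return
		}
	}
	fmt.Println("cluster started with the old release and upgraded in place by the new binary")
}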

                                                
                                    
TestKubernetesUpgrade (302.66s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.637681834s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-605797
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-605797: (11.848844834s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-605797 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-605797 status --format={{.Host}}: exit status 7 (78.761697ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m20.433248425s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-605797 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (77.685243ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-605797] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-605797
	    minikube start -p kubernetes-upgrade-605797 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6057972 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-605797 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-605797 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.114594638s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-605797" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-605797
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-605797: (2.419285222s)
--- PASS: TestKubernetesUpgrade (302.66s)
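
The downgrade attempt above is refused before any cluster changes are made, with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal Go sketch asserting that code (binary path and profile name are placeholders from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Attempt to move an already-upgraded cluster back to v1.28.0.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "kubernetes-upgrade-605797",
		"--memory=3072", "--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
		fmt.Println("downgrade correctly refused (exit 106)")
	} else {
		fmt.Println("unexpected result for downgrade attempt:", err)
	}
}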

                                                
                                    
TestMissingContainerUpgrade (70.04s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1554892076 start -p missing-upgrade-038405 --memory=3072 --driver=docker  --container-runtime=crio
E1212 00:30:59.759747   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1554892076 start -p missing-upgrade-038405 --memory=3072 --driver=docker  --container-runtime=crio: (19.928756675s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-038405
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-038405: (10.445187276s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-038405
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-038405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-038405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.593370207s)
helpers_test.go:176: Cleaning up "missing-upgrade-038405" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-038405
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-038405: (2.385797787s)
--- PASS: TestMissingContainerUpgrade (70.04s)
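
A minimal Go sketch of the missing-container recovery above (profile name is a placeholder from the log): stop and remove the node container with the docker CLI behind minikube's back, then let minikube start recreate it.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "missing-upgrade-038405"
	cmds := [][]string{
		{"docker", "stop", profile}, // stop the node container directly
		{"docker", "rm", profile},   // delete it so minikube finds it missing
		{"out/minikube-linux-amd64", "start", "-p", profile, // start recreates the container
			"--memory=3072", "--driver=docker", "--container-runtime=crio"},
	}
	for _, c := range cmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s\n", c, err, out)
			return
		}
	}
	fmt.Println("node container recreated by minikube start")
}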

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                    
TestPause/serial/Start (85.85s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-108809 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-108809 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m25.847809462s)
--- PASS: TestPause/serial/Start (85.85s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (306s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1438183441 start -p stopped-upgrade-148693 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1212 00:27:41.374219   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1438183441 start -p stopped-upgrade-148693 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.08517206s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1438183441 -p stopped-upgrade-148693 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1438183441 -p stopped-upgrade-148693 stop: (1.91361634s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-148693 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-148693 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m22.001757115s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (306.00s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-131237 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-131237 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (70.848671ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-131237] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (22.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-131237 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-131237 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.481744906s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-131237 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (22.82s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.5s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-108809 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-108809 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.49107972s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.50s)

                                                
                                    
TestNetworkPlugins/group/false (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-129742 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-129742 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (180.122609ms)

                                                
                                                
-- stdout --
	* [false-129742] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:29:19.361459  223369 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:29:19.361589  223369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:19.361597  223369 out.go:374] Setting ErrFile to fd 2...
	I1212 00:29:19.361601  223369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:19.361776  223369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-10975/.minikube/bin
	I1212 00:29:19.362202  223369 out.go:368] Setting JSON to false
	I1212 00:29:19.363324  223369 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4305,"bootTime":1765495054,"procs":385,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:29:19.363381  223369 start.go:143] virtualization: kvm guest
	I1212 00:29:19.365149  223369 out.go:179] * [false-129742] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:29:19.366565  223369 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:29:19.366599  223369 notify.go:221] Checking for updates...
	I1212 00:29:19.368426  223369 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:29:19.369562  223369 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-10975/kubeconfig
	I1212 00:29:19.370587  223369 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-10975/.minikube
	I1212 00:29:19.371598  223369 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:29:19.372652  223369 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:29:19.374042  223369 config.go:182] Loaded profile config "NoKubernetes-131237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:29:19.374193  223369 config.go:182] Loaded profile config "running-upgrade-299658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1212 00:29:19.374291  223369 config.go:182] Loaded profile config "stopped-upgrade-148693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1212 00:29:19.374397  223369 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:29:19.401222  223369 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1212 00:29:19.401374  223369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:19.468334  223369 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 00:29:19.45233655 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 00:29:19.468534  223369 docker.go:319] overlay module found
	I1212 00:29:19.471351  223369 out.go:179] * Using the docker driver based on user configuration
	I1212 00:29:19.472593  223369 start.go:309] selected driver: docker
	I1212 00:29:19.472612  223369 start.go:927] validating driver "docker" against <nil>
	I1212 00:29:19.472628  223369 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:29:19.474297  223369 out.go:203] 
	W1212 00:29:19.475516  223369 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1212 00:29:19.478886  223369 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-129742 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-129742" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-129742" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 00:29:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-131237
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 00:28:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-299658
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 00:28:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: stopped-upgrade-148693
contexts:
- context:
    cluster: NoKubernetes-131237
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 00:29:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-131237
  name: NoKubernetes-131237
- context:
    cluster: running-upgrade-299658
    user: running-upgrade-299658
  name: running-upgrade-299658
- context:
    cluster: stopped-upgrade-148693
    user: stopped-upgrade-148693
  name: stopped-upgrade-148693
current-context: NoKubernetes-131237
kind: Config
users:
- name: NoKubernetes-131237
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/client.crt
    client-key: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/client.key
- name: running-upgrade-299658
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/running-upgrade-299658/client.crt
    client-key: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/running-upgrade-299658/client.key
- name: stopped-upgrade-148693
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/stopped-upgrade-148693/client.crt
    client-key: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/stopped-upgrade-148693/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-129742

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-129742"

                                                
                                                
----------------------- debugLogs end: false-129742 [took: 3.28598378s] --------------------------------
helpers_test.go:176: Cleaning up "false-129742" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-129742
--- PASS: TestNetworkPlugins/group/false (3.63s)
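Note: every kubectl command in the debug log above fails with "context \"false-129742\" does not exist" because the dumped kubeconfig only defines contexts for NoKubernetes-131237, running-upgrade-299658, and stopped-upgrade-148693; the false-129742 profile had already been cleaned up. A minimal Go sketch of how the contexts in such a dump could be enumerated, assuming the YAML has been saved to a local file (the /tmp path below is hypothetical; this is not part of the test harness):

// kubecontexts.go - list the contexts defined in a kubeconfig dump.
// Sketch only; uses gopkg.in/yaml.v3 instead of client-go to stay minimal.
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeconfig models just the fields needed to enumerate contexts.
type kubeconfig struct {
	CurrentContext string `yaml:"current-context"`
	Contexts       []struct {
		Name string `yaml:"name"`
	} `yaml:"contexts"`
}

func main() {
	// Hypothetical path; in the report the kubeconfig lives under the Jenkins workspace.
	data, err := os.ReadFile("/tmp/kubeconfig.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var cfg kubeconfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	for _, c := range cfg.Contexts {
		marker := " "
		if c.Name == cfg.CurrentContext {
			marker = "*" // current context
		}
		fmt.Printf("%s %s\n", marker, c.Name)
	}
}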

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (6.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-131237 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-131237 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.327968822s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-131237 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-131237 status -o json: exit status 2 (310.073858ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-131237","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-131237
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-131237: (2.038637825s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.68s)
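Note: the stdout block above is a single JSON status line, which is why the test can assert "Kubelet":"Stopped" and "APIServer":"Stopped" even though `status` itself exits non-zero (exit status 2 here) for a --no-kubernetes profile. A minimal Go sketch of that check, assuming the minikube binary is on PATH (an illustration, not the actual test helper):

// ministatus.go - run `minikube status -o json` and verify Kubernetes is stopped.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// status mirrors the fields shown in the stdout block above.
type status struct {
	Name      string `json:"Name"`
	Host      string `json:"Host"`
	Kubelet   string `json:"Kubelet"`
	APIServer string `json:"APIServer"`
}

func main() {
	// `minikube status` exits non-zero when components are stopped, but Output
	// still returns the captured stdout, so the JSON can be inspected anyway.
	out, err := exec.Command("minikube", "-p", "NoKubernetes-131237", "status", "-o", "json").Output()
	if err != nil && len(out) == 0 {
		log.Fatal(err)
	}
	var s status
	if err := json.Unmarshal(out, &s); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
	if s.Kubelet != "Stopped" || s.APIServer != "Stopped" {
		log.Fatalf("expected Kubernetes components to be stopped in %s", s.Name)
	}
}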

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-131237 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-131237 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.006631413s)
--- PASS: TestNoKubernetes/serial/Start (6.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22101-10975/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-131237 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-131237 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.440031ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
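Note: this check passes precisely because the ssh command fails. `systemctl is-active` exits 0 only when the unit is active, and exit code 3 is the systemd convention for an inactive unit, which `minikube ssh` then surfaces as its own exit status 1 (hence the "Process exited with status 3" stderr above). A minimal Go sketch of the same idea, assuming a profile started with --no-kubernetes (not the test's implementation):

// kubeletinactive.go - assert that kubelet is not active inside a minikube node.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// systemctl is-active exits 0 if the unit is active; any non-zero exit
	// (systemd uses 3 for "inactive") means kubelet is not running.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-131237",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err == nil {
		log.Fatal("kubelet is active, but this profile was started with --no-kubernetes")
	}
	fmt.Println("kubelet is not active, as expected")
}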

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (31.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (16.541598801s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.822115476s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.36s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-131237
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-131237: (1.273659715s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-131237 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-131237 --driver=docker  --container-runtime=crio: (6.337368161s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-131237 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-131237 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.384683ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-148693
E1212 00:32:41.373599   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (49.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.569719764s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.57s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (49.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (49.067303042s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (41.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1212 00:33:36.949210   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-896350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (41.758250273s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.76s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-743506 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1a0e8330-9dea-4063-9369-234ee8e6ef43] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1a0e8330-9dea-4063-9369-234ee8e6ef43] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.00275781s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-743506 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.24s)
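Note: the DeployApp steps all follow the same shape: create the busybox pod from testdata/busybox.yaml, then wait (up to 8m0s) for pods matching integration-test=busybox to become healthy. A minimal Go sketch of such a poll using kubectl's jsonpath output, assuming kubectl is on PATH (an illustration only; the harness uses its own helpers and checks readiness conditions, not just the pod phase):

// waitpod.go - poll until pods matching a label selector report Running.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(8 * time.Minute)
	for time.Now().Before(deadline) {
		// Ask kubectl for the phase of every pod matching the selector.
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-743506",
			"get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			if len(phases) > 0 && allRunning(phases) {
				fmt.Println("all matching pods are Running")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for busybox pod")
}

func allRunning(phases []string) bool {
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}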

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-675290 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [9ea911e6-9d84-479e-80ec-f198c0da93b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [9ea911e6-9d84-479e-80ec-f198c0da93b7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.002912245s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-675290 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (15.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-743506 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-743506 --alsologtostderr -v=3: (15.978205767s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.98s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-858659 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f82842ad-b3b7-41c5-a1cf-a78ae8f92ea1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f82842ad-b3b7-41c5-a1cf-a78ae8f92ea1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003414001s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-858659 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (18.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-675290 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-675290 --alsologtostderr -v=3: (18.117772991s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-858659 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-858659 --alsologtostderr -v=3: (16.175055055s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-743506 -n old-k8s-version-743506
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-743506 -n old-k8s-version-743506: exit status 7 (79.467093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-743506 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
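Note: the `status --format={{.Host}}` calls in these EnableAddonAfterStop steps exit with status 7 once a profile has been stopped, and the harness explicitly tolerates that ("status error: exit status 7 (may be ok)") as long as the host state is still readable. A minimal Go sketch of that pattern, extracting the exit code and continuing when the host reports Stopped (an illustration, not the harness code):

// hoststatus.go - query a stopped profile's host state, tolerating minikube's
// non-zero exit code for stopped components.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "old-k8s-version-743506")
	out, err := cmd.Output() // stdout is returned even when the command exits non-zero
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The report shows exit status 7 for a stopped host; keep going as long
		// as a readable state came back.
		fmt.Printf("status exited %d (may be ok), host state: %q\n", exitErr.ExitCode(), state)
	} else if err != nil {
		log.Fatal(err)
	}
	if state != "Stopped" && state != "Running" {
		log.Fatalf("unexpected host state: %q", state)
	}
}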

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (25.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-743506 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (25.455349633s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-743506 -n old-k8s-version-743506
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (25.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675290 -n no-preload-675290
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675290 -n no-preload-675290: exit status 7 (75.754696ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-675290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (26.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-675290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (25.994001332s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675290 -n no-preload-675290
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (26.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-858659 -n embed-certs-858659
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-858659 -n embed-certs-858659: exit status 7 (114.168935ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-858659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (46.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-858659 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (46.349575818s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-858659 -n embed-certs-858659
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.67s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-jhg57" [d0734f6b-f43b-4c8f-a510-cb132816b525] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-jhg57" [d0734f6b-f43b-4c8f-a510-cb132816b525] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.004673237s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-jhg57" [d0734f6b-f43b-4c8f-a510-cb132816b525] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003582012s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-743506 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-zdhfk" [6274d366-4620-4ac5-acd2-460f482d20eb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002806286s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-743506 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-zdhfk" [6274d366-4620-4ac5-acd2-460f482d20eb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002547346s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-675290 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-675290 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-079970 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-079970 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (41.197685747s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (24.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (24.68273071s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (24.68s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-4fw4k" [f2596c32-24e3-46ba-946a-60b89b5e73dc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004526104s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-4fw4k" [f2596c32-24e3-46ba-946a-60b89b5e73dc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002858093s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-858659 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-858659 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-821472 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-821472 --alsologtostderr -v=3: (8.000776691s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (41.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.123631064s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-821472 -n newest-cni-821472
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-821472 -n newest-cni-821472: exit status 7 (76.9104ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-821472 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-821472 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (9.740717449s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-821472 -n newest-cni-821472
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-079970 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dac1f5b6-efb8-48cc-90f3-3ba30e837989] Pending
helpers_test.go:353: "busybox" [dac1f5b6-efb8-48cc-90f3-3ba30e837989] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dac1f5b6-efb8-48cc-90f3-3ba30e837989] Running
E1212 00:35:44.442551   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003832988s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-079970 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-821472 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-079970 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-079970 --alsologtostderr -v=3: (18.143810788s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (42.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1212 00:35:59.759679   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/functional-155345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.152350194s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970: exit status 7 (85.944838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-079970 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-079970 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-079970 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (52.849992561s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079970 -n default-k8s-diff-port-079970
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-129742 "pgrep -a kubelet"
I1212 00:36:10.976073   14503 config.go:182] Loaded profile config "auto-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-129742 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-xlbh5" [809cbe83-4c67-4ef9-a2aa-577641a88103] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-xlbh5" [809cbe83-4c67-4ef9-a2aa-577641a88103] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004233478s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-129742 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-qprw7" [600cae03-a5c5-4b4a-96fe-5e158fd32d19] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004055264s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (59.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (59.279668959s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-129742 "pgrep -a kubelet"
I1212 00:36:44.116459   14503 config.go:182] Loaded profile config "kindnet-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-129742 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qdlfr" [9b913b30-df20-419a-874c-f3320682881b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qdlfr" [9b913b30-df20-419a-874c-f3320682881b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003317658s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-129742 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (49.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.099643131s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-262gm" [fe59b7aa-0847-41f3-91c1-ae866c1c2c9d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005275517s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-262gm" [fe59b7aa-0847-41f3-91c1-ae866c1c2c9d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003371861s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-079970 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-079970 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (62.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m2.869279382s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (47.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.953149034s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-r8n5z" [a0f09a0a-05cd-4c74-944e-59e2f024587d] Running
E1212 00:37:41.374139   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/addons-758245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003368642s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-129742 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-129742 "pgrep -a kubelet"
I1212 00:37:44.466213   14503 config.go:182] Loaded profile config "custom-flannel-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-129742 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-r8nqn" [6c7de78a-a180-46d4-9919-2e9f6177a083] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
I1212 00:37:44.700717   14503 config.go:182] Loaded profile config "calico-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
helpers_test.go:353: "netcat-cd4db9dbf-r8nqn" [6c7de78a-a180-46d4-9919-2e9f6177a083] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003428572s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-129742 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qjccs" [05614bc5-d730-4bab-9005-40d9961341d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qjccs" [05614bc5-d730-4bab-9005-40d9961341d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003181346s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-129742 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-129742 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-b6b8t" [67040788-21dc-48ff-a390-f92c47ccd0b8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003887422s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (63.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-129742 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m3.306659302s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-129742 "pgrep -a kubelet"
I1212 00:38:16.689983   14503 config.go:182] Loaded profile config "flannel-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-129742 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fjqtd" [00cfcaef-cc5a-456c-8689-3c7e348edaad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-fjqtd" [00cfcaef-cc5a-456c-8689-3c7e348edaad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003739499s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-129742 "pgrep -a kubelet"
I1212 00:38:17.454019   14503 config.go:182] Loaded profile config "enable-default-cni-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-129742 replace --force -f testdata/netcat-deployment.yaml
I1212 00:38:17.865363   14503 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7c2jp" [02346c3b-a553-4b71-96bb-b08af0202721] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7c2jp" [02346c3b-a553-4b71-96bb-b08af0202721] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.010545867s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-129742 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-129742 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-129742 "pgrep -a kubelet"
I1212 00:39:17.983787   14503 config.go:182] Loaded profile config "bridge-129742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-129742 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-pgvss" [d69dbe7d-78bb-40d7-b123-578bfbf5fe6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-pgvss" [d69dbe7d-78bb-40d7-b123-578bfbf5fe6e] Running
E1212 00:39:22.108959   14503 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/old-k8s-version-743506/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003168319s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-129742 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-129742 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                    

Test skip (34/415)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
148 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
370 TestStartStop/group/disable-driver-mounts 0.18
379 TestNetworkPlugins/group/kubenet 3.56
388 TestNetworkPlugins/group/cilium 3.6
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-039387" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-039387
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-129742 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-129742" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-129742" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 00:28:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-299658
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 00:28:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: stopped-upgrade-148693
contexts:
- context:
    cluster: running-upgrade-299658
    user: running-upgrade-299658
  name: running-upgrade-299658
- context:
    cluster: stopped-upgrade-148693
    user: stopped-upgrade-148693
  name: stopped-upgrade-148693
current-context: ""
kind: Config
users:
- name: running-upgrade-299658
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/running-upgrade-299658/client.crt
    client-key: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/running-upgrade-299658/client.key
- name: stopped-upgrade-148693
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/stopped-upgrade-148693/client.crt
    client-key: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/stopped-upgrade-148693/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-129742

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-129742"

                                                
                                                
----------------------- debugLogs end: kubenet-129742 [took: 3.380879878s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-129742" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-129742
--- SKIP: TestNetworkPlugins/group/kubenet (3.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-129742 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-129742" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 00:29:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-131237
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 00:28:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-299658
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-10975/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 00:28:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: stopped-upgrade-148693
contexts:
- context:
    cluster: NoKubernetes-131237
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 00:29:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-131237
  name: NoKubernetes-131237
- context:
    cluster: running-upgrade-299658
    user: running-upgrade-299658
  name: running-upgrade-299658
- context:
    cluster: stopped-upgrade-148693
    user: stopped-upgrade-148693
  name: stopped-upgrade-148693
current-context: NoKubernetes-131237
kind: Config
users:
- name: NoKubernetes-131237
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/client.crt
    client-key: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/NoKubernetes-131237/client.key
- name: running-upgrade-299658
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/running-upgrade-299658/client.crt
    client-key: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/running-upgrade-299658/client.key
- name: stopped-upgrade-148693
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/stopped-upgrade-148693/client.crt
    client-key: /home/jenkins/minikube-integration/22101-10975/.minikube/profiles/stopped-upgrade-148693/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-129742

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-129742" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129742"

                                                
                                                
----------------------- debugLogs end: cilium-129742 [took: 3.442039019s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-129742" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-129742
--- SKIP: TestNetworkPlugins/group/cilium (3.60s)

                                                
                                    